Latest Alerts

SANS Internet Storm Center, InfoCON: green

Detecting Queries to "odd" DNS Servers, (Tue, May 20th)

Tue, 05/20/2014 - 10:59

Usually, your operating system is assigned a DNS server either via DHCP (or router advertisements in IPv6) or statically. The resolver library on a typical workstation will then pass all DNS lookups to this set of DNS servers. However, malware sometimes tries to use its own DNS servers, and blocking outbound port 53 traffic (UDP and TCP) from anything but your official resolvers can help identify these hosts.
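This approach can be approximated with a couple of firewall rules. Here is a minimal iptables sketch, assuming an internal resolver at 192.0.2.53 (a placeholder address; substitute your own resolver addresses, and adjust chains/interfaces for your environment):

```shell
# Log, then drop, any forwarded DNS traffic not headed to the official resolver.
# 192.0.2.53 is a stand-in -- replace with your real internal DNS server(s).
iptables -A FORWARD -p udp --dport 53 ! -d 192.0.2.53 -j LOG --log-prefix "ROGUE-DNS: "
iptables -A FORWARD -p udp --dport 53 ! -d 192.0.2.53 -j DROP
iptables -A FORWARD -p tcp --dport 53 ! -d 192.0.2.53 -j LOG --log-prefix "ROGUE-DNS: "
iptables -A FORWARD -p tcp --dport 53 ! -d 192.0.2.53 -j DROP
```

Watching for the "ROGUE-DNS:" log entries then becomes a cheap infected-host detector, which is essentially what Brent is doing below.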

Brent, one of our readers, does just that and keeps finding infected machines that way. Just now, he is investigating a system that attempted to connect to the following name servers:

101.226.4.6
114.114.114.114
114.114.115.115
123.125.81.6
140.207.198.6
202.97.224.69
211.98.2.4
218.30.118.6
14.33.133.189

He has not identified the malware behind this yet, and none of the other systems he is running flagged the host ("we are running bluecoat web filter AND we're using OpenDNS AND I'm running snort"). Brent uses oak (http://ktools.org/oak/) to help him watch his logs and alert him to issues like this.

According to the Farsight Security passive DNS database, these IPs resolve to a number of "interesting" hostnames. I am just showing a few here; the full list is too long.

ns-facebook-[number]-[number].irl-dns.info   <- the [number] part appears to be a random number
*.v9dns.com    <- '*' to indicate various host names in this domain.
v2.3322pay.com
bjcgsm.com
sf5100.com

------------------
Johannes B. Ullrich, Ph.D.

SANS Technology Institute
Twitter

(c) SANS Internet Storm Center. http://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Categories: Alerts

De-Clouding your Life: Things that should not go into the cloud., (Wed, May 7th)

Tue, 05/20/2014 - 06:03

A couple of weeks ago, Dropbox announced that it had invalidated some old "shared links" that users had used to share confidential documents, like tax returns [1]. The real question, of course, is how these tax returns got exposed in the first place.

Dropbox usually requires a username and a password to access documents, and even offers a two-factor solution as an option. But regardless, the user may allow a document to be accessed by others who just know the "secret" link.

As we all know, the problem is that "secret" links leak easily. But as users rely more on cloud services to share files, and per-file passwords are way too hard to set up, this is going to happen more and more. Dropbox isn't the only service that offers this feature. In a recent discussion with some banks, the problem came up that more and more customers attempt to share documents with the bank this way, for things like mortgage applications. While the banks do refuse to accept documents shared like this, the pressure exists, and I am sure other businesses, facing less regulatory pressure, will happily play along.

For a moment, let's assume the cloud service works "as designed" and your username and password are strong enough. Cloud services can be quite useful as a cheap "offsite backup", for example to keep a list of serial numbers of your possessions in case of a burglary or a catastrophic event like a fire. But as soon as you start sharing documents, you run the risk of others not taking as good care of them as you would. Maybe their passwords are no good, or maybe they will let the "secret" link you gave them wander.

Confidential personal, financial, or medical information should probably not go into your cloud account. And if it must, encrypt the files before uploading them.
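For example, a file can be encrypted locally with GnuPG before any sync client ever sees it. A minimal sketch (the file name and passphrase are made up for illustration; in interactive use, plain "gpg --symmetric file" will simply prompt for the passphrase instead of taking it on the command line):

```shell
# Create a stand-in for a sensitive document (illustrative only)
printf 'SSN: 078-05-1120\n' > tax-return.txt

# Encrypt before uploading: only the .gpg file should ever leave the machine.
# --batch/--passphrase are used here only so the example runs unattended.
gpg --batch --yes --pinentry-mode loopback --passphrase 'correct horse' \
    --symmetric --cipher-algo AES256 tax-return.txt    # writes tax-return.txt.gpg

# Later, to recover the original:
gpg --batch --yes --pinentry-mode loopback --passphrase 'correct horse' \
    --decrypt tax-return.txt.gpg > recovered.txt
```

Only tax-return.txt.gpg goes to the cloud; anyone who obtains the "secret" link still needs the passphrase.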

Here are a couple of steps to de-cloud your life:

- Set up an "ownCloud" server. It works very much like Dropbox, with mobile clients available for Android and iOS, but you will have to run the server yourself. I suggest you make it accessible via a VPN connection only. SharePoint may be a similar solution for Windows folks.

- Run your own mail server. This can be a real pain, and even large companies move mail services to cloud providers (only to regret it later...?). But pretty much all cloud mail providers will store your data in the clear, and in many ways they have to. Systems to provide real end-to-end encryption for cloud/web-based e-mail are still experimental at this point.

- Offsite backup at a friend's or relative's house. With the widespread availability of high-speed home network connections, it is possible to set up a decent offsite backup system by "co-locating" a simple NAS somewhere. The disks on the NAS can be encrypted, and the connection can again use a VPN.

- For Apple users, make local backups of your devices instead of using iCloud. iCloud stores backups unencrypted and all it takes for an attacker to retrieve a backup is your iCloud username/password.

Any other tips to de-cloud?

[1] https://blog.dropbox.com/2014/05/web-vulnerability-affecting-shared-links/

------
Johannes B. Ullrich, Ph.D.
SANS Technology Institute
Twitter

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Categories: Alerts

ISC StormCast for Tuesday, May 20th 2014 http://isc.sans.edu/podcastdetail.html?id=3985, (Tue, May 20th)

Mon, 05/19/2014 - 17:27
(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Categories: Alerts

ISC StormCast for Monday, May 19th 2014 http://isc.sans.edu/podcastdetail.html?id=3983, (Mon, May 19th)

Mon, 05/19/2014 - 08:04
(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Categories: Alerts

sed and awk will always rock, (Sun, May 18th)

Mon, 05/19/2014 - 08:03

Fresh off our discussion regarding PowerShell, now for something completely different. In order to bring balance to the force, I felt I should share my recent use of sed, "the ultimate stream editor", and awk, "an extremely versatile programming language for working on files", to solve one of fourteen challenges in a recent CTF exercise I participated in.

The challenge included only a legitimate bitmap file (BMP) that had been modified via least significant bit (LSB) steganography, plus the following details: the BMP was modified to carry a message starting at the 101st byte and only in every 3rd byte thereafter. The challenge was therefore to recover the message and paste it as the answer for glory and prizes (not really, but pride points count). What was cool about this CTF is that while a number of my associates participated, not one of us approached the challenge the same way. One used Excel with VB, another used AutoIT, and yet another wrote his own C#. Since I'm not as smart as any of these guys, I opted to trust the force and use our good and faithful servants sed and awk on my SIFT 3.0 VM, along with a couple of my preferred editors (010 and TextPad) on my Windows host. I know, I know, "WTF, Russ, just do it on one system." I can say only that I am fixed in my ways and like to do certain things with certain tools, so I'm actually faster bouncing back and forth between systems. Here's what I did in seven short steps. Note: I share this because it worked and I enjoyed it, not because I'm saying it's an optimal or elegant method.

1) I opened the .bmp in 010 Editor and first deleted bytes 1 through 100, given that the message starts at the 101st byte. Remember, if you choose to do this by offset, the first byte is offset 0 and the 101st byte is offset 100. This critical point will be pounded (literally) into your head by Mike Poor when taking the GCIA track, which I can't recommend enough. Then, under View, I chose Edit As and switched from Hex to Binary (remember, we're working with the least significant bit). I then selected all binary, chose Copy As, and selected Copy As Binary Text, which I saved as challenge13binaryRaw.txt.

2) I opened challenge13binaryRaw.txt in TextPad because I love its replace functionality. The binary text output from 010 Editor is separated by a space every 8 bits/1 byte. In TextPad I used a regular expression replacement to convert the text to a single column (replaced every space with a newline \n), which I saved as challenge13binaryRaw-column.txt.

3) I then used sed on challenge13binaryRaw-column.txt to print only every third byte, described in the challenge description as those containing the message, and saved it to every3rd.txt as follows: sed -n '1~3p' challenge13binaryRaw-column.txt > every3rd.txt. In this syntax, sed simply starts at the 1st line then prints every 3rd ('1~3p').

4) To then grab the least significant bit from each line of every3rd.txt I used awk as follows: awk '{print substr($0,8)}' every3rd.txt > lsb.txt. This tells awk to grab the 8th character of each line and print it out to lsb.txt, the 8th character representing the least significant bit in each 8 bit byte.

5) lsb.txt now contains only the message but I need to format it back into machine readable binary for translation to human readable text. Back to TextPad where I used another regex replacement to convert a long column of single bits back to one line and save it as lsb-oneline.txt. Replacing a carriage return (\r) with nothing will do exactly that.

6) In order for machine translation to successfully read the newly compiled message traffic, we now need to reintroduce a space between every 8 bits/ 1 byte which we can again accomplish with sed and save it to finalBinary.txt as follows: sed 's/\(.\{8\}\)/\1 /g' lsb-oneline.txt > finalBinary.txt

7) I then copied the content from finalBinary.txt into a binary translator and out popped the message.
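For the command-line-inclined, steps 2 through 6 can also be chained into a single pipeline (a sketch using GNU sed; the input below is a tiny made-up byte stream standing in for 010 Editor's output, since the challenge file isn't shared):

```shell
# Made-up stand-in for 010 Editor's space-separated binary text output.
echo '00000000 00000001 00000010 00000011 00000100 00000101 00000110 00000111' |
  tr ' ' '\n' |                            # one byte per line (step 2)
  sed -n '1~3p' |                          # every 3rd byte, GNU sed (step 3)
  awk '{ printf "%s", substr($0, 8) }' |   # 8th char = least significant bit (step 4)
  sed 's/\(.\{8\}\)/\1 /g'                 # regroup into 8-bit bytes (steps 5-6)
# prints: 010
```

With the real BMP-derived input, the output of the final sed is exactly the space-separated binary that a binary-to-text translator expects in step 7.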

It was actually the same short message looped many times through the BMP, but I went for overkill extracting it, not knowing any parameters other than those defined by the challenge description (no mention of how long the message was). A bit clunky, to be sure, but for you forensicators looking for ways to pull out messages or content embedded via LSB steganography, this approach might be useful. And no, I'm not telling you what the message was or sharing the BMP file in case the CTF administrators wish to use it again. :-) You'll want to brush up on your regex; one of my favorite resources is here.

Cheers and enjoy.

Russ McRee | @holisticinfosec

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Categories: Alerts

Apple Update for CVE 2014-1347, (Sat, May 17th)

Sat, 05/17/2014 - 07:24

Apple has released an update for iTunes to address CVE-2014-1347 (1), a vulnerability in the permissions of files and folders on the system. It fixes a situation where "upon each reboot, the permissions for the /Users and /Users/Shared directories would be set to world-writable, allowing modification of these directories. This issue was addressed with improved permission handling".

As always, please ensure that all changes are tested and deployed in compliance with enterprise change management standards :)

(1)http://support.apple.com/kb/TS5434

tony d0t carothers --gmail

(c) SANS Internet Storm Center. http://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Categories: Alerts

Punking Pet Peeves with PowerShell, (Fri, May 16th)

Fri, 05/16/2014 - 14:10

Yesterday, Rob discussed Collecting Workstation / Software Inventory Several Ways, including PowerShell. I don't spend nearly as much time as I used to going hands-on with systems, but every time I need to solve a problem on Windows hosts, PowerShell is there for me. Sadly, my PowerShell fu is weak compared to where I'd like it to be, but as an assimilated minion (1 of 7) of the Redmond Empire I have the benefit of many resources. Luckily, much content is publicly available, and you have Lee Holmes to help you with PowerShell mastery. Lee really is the man on the PowerShell front; you'll note his Windows PowerShell Pocket Reference in its rightful place on my desk.

Amongst my many pet peeves are overly permissive file shares, with the likes of Everyone, Domain Users, Domain Computers, and Authenticated Users granted unfettered access. No one ever leaves PII, config files, and username/password lists on a share, right? And no one with unauthorized or inappropriate access ever makes their way onto enterprise networks, right? Sure, Russ, sure. :-) Back in the real world, where would we be without an entire industry sector dedicated to DLP (data leak prevention) solutions? Oh yeah, probably in a world with less spam and fewer cold calls, but I digress.

Step 1: We admitted we were powerless over misconfiguration—that our networks had become unmanageable.

Step 2: Came to believe that PowerShell could restore us to sanity.

I have fallen deeply, unmanageably, irrevocably in love with the Revoke-SmbShareAccess cmdlet, available on Windows Server 2012 R2 and Windows 8.1 systems (Windows PowerShell 4.0). Having tried to solve this issue with the likes of Set-Acl, and requiring serious counseling thereafter, Revoke-SmbShareAccess (and its friends Block, Unblock, Get, and Grant) allowed me to do in three lines what could not otherwise be done easily or elegantly.

"The Revoke-SmbShareAccess cmdlet removes all of the allow access control entries (ACEs) for a trustee from the security descriptor of the Server Message Block (SMB) share." Sweet!

Examples? You bet. The terms share and server are used generically here; you'll need to apply the appropriate nomenclature.

Local (single share, single account):
Revoke-SmbShareAccess -Name share -AccountName "Everyone" -force

Local (single share, multiple accounts):
Revoke-SmbShareAccess -Name share -AccountName "Everyone","Domain Users","Domain Computers","Authenticated Users" -force

Remote (single share, single account):
Revoke-SmbShareAccess -name share -CimSession server -AccountName Everyone -Force

Remote (single share, multiple accounts):
Revoke-SmbShareAccess -name share -CimSession server -AccountName "Everyone","Authenticated Users","Domain Users","Domain Computers" -Force

For Remote (multiple share, multiple servers, multiple accounts), where you want to use a list of servers and/or shares you can build a small script and define variables that pull from text lists.

$servers = Get-Content -Path C:\powershell\data\servers.txt
$shares = Get-Content -Path C:\powershell\data\shares.txt
Revoke-SmbShareAccess -name $shares -CimSession $servers -AccountName "Everyone","Authenticated Users","Domain Users","Domain Computers" -Force

Obviously, you'll want to tune, experiment, and optimize, but hopefully this helps get you started on the cleanup process. Communicate early and often with your user base, advising them to create security groups that grant share access only to those people (and systems) in the appropriate group. Don't just go removing these permissions without an awareness campaign. You don't want that call: "You broke my entire service when you removed Everyone share permissions!" Argh. Remember also the nuances (there are many) between share permissions and NTFS permissions.

Good luck and cheers!

Russ McRee | @holisticinfosec

(c) SANS Internet Storm Center. http://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Categories: Alerts

ISC StormCast for Friday, May 16th 2014 http://isc.sans.edu/podcastdetail.html?id=3981, (Fri, May 16th)

Thu, 05/15/2014 - 20:38
(c) SANS Internet Storm Center. http://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Categories: Alerts

APPLE-SA-2014-05-15-2 iTunes 11.2 available for download - security fixes address CVE-2014-1296: http://support.apple.com/kb/HT1222 & http://support.apple.com/kb/HT6245, (Thu, May 15th)

Thu, 05/15/2014 - 20:37

=============== Rob VandenBrink Metafore

(c) SANS Internet Storm Center. http://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Categories: Alerts

APPLE-SA-2014-05-15-1 addresses multiple security issues, updates OS X Mavericks v10.9.3 - more info here: http://support.apple.com/kb/HT6207, (Thu, May 15th)

Thu, 05/15/2014 - 14:33

=============== Rob VandenBrink Metafore

(c) SANS Internet Storm Center. http://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Categories: Alerts

Collecting Workstation / Software Inventory Several Ways, (Thu, May 15th)

Thu, 05/15/2014 - 14:12

One of the "prepare for a zero day" steps that I highlighted in my story last week was to inventory your network stations, and know what's running on them.  In short, the first 2 points in the SANS 20 Critical Security Controls.  This can mean lots of things depending on your point of view.

Nmap can make an educated guess on the existence of hosts, their OS and active services:
nmap -p0-65535 -O -sV x.x.x.0/24
Good information, but not "take it to the bank" accuracy. It'll also take a LONG time to run; you might want to trim down the number of ports being evaluated (or not). Even if you don't take this info as gospel, it's still good supplemental info for stations that are not in your domain. You can kick this up a notch with Nessus (Nessus will also log in to stations and enumerate software if you have credentials).

If you're running active directory, you can get a list of hosts using netdom, and a list of apps on each host using WMIC:
netdom.exe query /domain:domainname.com workstation | find /v "List of Workstations" >stations.out

(if you use "server" instead of "workstation", you'll get the server list instead)

and for each station:
wmic product list brief

But having run exactly this recently, I can tell you it takes a LONG time in a larger domain. How can we speed this up? In a word: PowerShell.
To inventory a domain:
import-module ActiveDirectory
Get-ADComputer -Filter * -Property * | Format-Table Name,OperatingSystem,OperatingSystemServicePack,OperatingSystemVersion


To inventory the software on a remote workstation:
Get-WmiObject -Class Win32_Product -computername stationnamegoeshere | Select-Object -Property Name

(see here for more info: http://technet.microsoft.com/en-us/library/ee176860.aspx)

I collected this information first using the netdom/wmic way (hours), then using powershell (minutes).  Guess which way I'd recommend?

OK, now we've got what can easily be Megabytes of text.  How do we find out who needs some TLC?  Who's running old or unpatched software?

As an example - who has or does NOT have EMET 4.1 installed?

To check this with WMIC:

"go.cmd" (for some reason all my parent scripts are called "go")  might look like:
@echo off
for /f %%G in (stations.out) do call emetchk.cmd %%G

and emetchk.cmd might look like:
@echo off
echo %1  >> inventory.txt
wmic /node:%1 product where "name like 'EMET%%'" get name, identifyingnumber, InstallDate >> inventory.txt
echo.


Or with PowerShell, the domain enumeration would look like:
import-module ActiveDirectory
Get-ADComputer -Filter * -Property * | Format-Table Name,OperatingSystem,OperatingSystemServicePack,OperatingSystemVersion  > stations.out

Then, to enumerate the actual applications (for each station in stations.out), you could either use the emetchk.cmd script above, or re-do it in PowerShell (I haven't gotten that far yet, but if any of our readers want to add a script in the comments, I'm sure folks would love to see it!). In this example, the per-station query would be:

Get-WmiObject -Class Win32_Product -computername stationname | Select-Object -Property Name > stationname.txt

Done!

If you run this periodically, you can "diff" the results between runs to see what's changed. Diff is standard in Linux, and is part of Windows these days as well if you install SFU (Services for UNIX), or you can get a nice diff report in PowerShell with:

Compare-Object -ReferenceObject (Get-Content c:\path\file01.txt) -DifferenceObject (Get-Content c:\path\file02.txt)
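On the Linux side, the same between-runs comparison can be sketched with sort and comm (the snapshot contents here are made up for illustration):

```shell
# Two made-up inventory snapshots from consecutive runs
printf '%s\n' 'EMET 4.1' 'Firefox 29' > run1.txt
printf '%s\n' 'EMET 4.1' 'Java 7'     > run2.txt

# comm requires sorted input
sort run1.txt > run1.sorted
sort run2.txt > run2.sorted

comm -13 run1.sorted run2.sorted    # only in run2: newly installed (prints "Java 7")
comm -23 run1.sorted run2.sorted    # only in run1: removed (prints "Firefox 29")
```

Anything in the first output appeared since the last run, and anything in the second disappeared; both are worth a closer look.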

But what about the stations that aren't in your corporate domain? Even if your domain inventory is solid, you should still sniff network traffic using tools like PVS (from Tenable) or p0f (open source, from http://lcamtuf.coredump.cx/p0f3/) to identify folks running old versions of Java, Flash, AIR, IE, or Firefox (mostly pre-auto-update versions) who aren't in your domain and so might get missed in your "traditional" data collection. Normally these sniffer stations monitor traffic in and out of choke points in the network, like firewalls or routers. We covered this earlier this year here: https://isc.sans.edu/diary.html?date=2013-12-19

I hope this outlines free or close to free solutions to get these tasks done for you.  If you've found other (or better) ways to collect this info without a large cash outlay and/or a multi-week project, please share using our comment form.

===============
Rob VandenBrink
Metafore

(c) SANS Internet Storm Center. http://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Categories: Alerts

Breaches and Attacks that are "Not in Scope", (Thu, May 15th)

Thu, 05/15/2014 - 11:05

Last week, we saw Orange (a telecom company based in France) compromised, with the info for 1.3 million clients breached. At this time, it does not appear that any credit card numbers or credentials were exposed in that event (http://www.reuters.com/article/2014/05/07/france-telecomunications-idUSL6N0NT2I120140507).

The interesting thing about this data breach was that it involved systems that would not be considered "primary" - the site compromised housed contact information for customers who had "opted in" to receive sales and marketing information.

I'm seeing this as a disturbing trend. During security assessments, penetration tests, and especially PCI audits, I see organizations narrow the scope to systems that they deem "important". But guess what: the data being protected has sprawled into other departments, and is now housed on other servers, in other security zones where it should not be, and in some cases sits in spreadsheets on laptops or tablets, often unencrypted. Backup images and backup servers are other components that are often not as well protected as the primary data (don't ask me why this oversight is so common).

The common quote amongst penetration testers and other security professionals for this situation is: "guess what, the internet (and the real attackers) have not read or signed your scope document".

It's easy to say that we need to be better stewards of our customers' information, but we really do. Organisations need to characterise what their information looks like (with regexes, or dummy customer records that you can search for), then go actively hunt for it. Be your own Google: write scripts to crawl your own servers and workstations looking for this information. Once this process is in place, it's easy to run it periodically, or better yet, continuously. Put this info into your Snort (or other IPS) signatures so you can see it on the wire, in emails, and in file copy or save operations.
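As a trivial "be your own Google" sketch, a first-pass hunt for anything shaped like a 16-digit card number could start with grep (the pattern and paths are illustrative; expect false positives, tune for your own data formats, and validate hits, e.g. with a Luhn check, before raising alarms):

```shell
# Plant one "leaked" record in a scratch directory to demonstrate the idea
dir=$(mktemp -d)
printf 'name,card\nalice,4111 1111 1111 1111\n' > "$dir/export.csv"
printf 'meeting notes, nothing sensitive\n'     > "$dir/notes.txt"

# -r recurse, -E extended regex, -l list matching file names only
grep -rEl '[0-9]{4}[- ]?[0-9]{4}[- ]?[0-9]{4}[- ]?[0-9]{4}' "$dir"
# prints only the path of export.csv
```

In practice you would point this at file shares and workstation drives on a schedule and diff the results against an allow-list of systems that are supposed to hold such data.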

Too often the breach that happens is on a system that's out of scope and much less protected than our "crown jewels" data deserves.  If you're in the process of establishing a scope for PCI or some other regulatory framework, stop and ask yourself "wouldn't it be a good idea to put these controls on the rest of the network too?"

===============
Rob VandenBrink
Metafore

(c) SANS Internet Storm Center. http://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Categories: Alerts

ISC StormCast for Thursday, May 15th 2014 http://isc.sans.edu/podcastdetail.html?id=3979, (Wed, May 14th)

Wed, 05/14/2014 - 15:27

May OUCH Newsletter: I'm Hacked, Now What? http://www.securingthehuman.org/resources/newsletters/ouch/2014#may2014, (Wed, May 7th)

Wed, 05/07/2014 - 12:01

New DNS Spoofing Technique: Why we haven't covered it., (Wed, May 7th)

Wed, 05/07/2014 - 06:15

The last couple of days, a lot of readers sent us links to articles proclaiming yet another new flaw in DNS. "Critical Vulnerability in BIND Software Puts DNS Protocol Security At Risk" [1] claimed one article, going on to state: "The students have found a way to compel DNS servers to connect with a specific server controlled by the attacker that could respond with a false IP address."

So how bad is this really?

First of all, here is the "TL;DR" version of the vulnerability:

A domain usually uses several authoritative DNS servers. A recursive DNS server resolving a domain will pick a "random" authoritative DNS server for this particular domain. The real question is: how random? As it turns out, it isn't random at all, and this is a feature. BIND attempts to use the fastest name server, and has a special algorithm (Smoothed Round-Trip Time, or SRTT) to figure out which server to use.

The vulnerability found here allows an attacker to influence the SRTT values in order to direct the name server to use a specific authoritative name server for a domain.

So the result is that the attacker can determine which authoritative name server is being used, BUT it has to be among the set of valid authoritative name servers. The attacker cannot redirect the queries to an arbitrary name server of the attacker's choosing.

So how does this make DNS spoofing easier?

The attacker has to guess three variables in order to spoof a DNS response:

  1. the query id (1/65535)
  2. the source port (theoretically 1/65535, but in most implementations more like 1/5000).
  3. the name server IP (average 1/4) 
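Multiplying those guesses out shows why pinning the server only helps so much; a back-of-the-envelope sketch using the figures from the list above:

```python
# Rough odds for a single spoofed-response attempt, using the
# approximate figures from the list above.
query_ids    = 65535  # possible query IDs
source_ports = 5000   # typical effective source-port pool
name_servers = 4      # average authoritative servers per domain

# Without the SRTT trick, all three values must be guessed at once.
p_baseline = 1 / (query_ids * source_ports * name_servers)

# With the server pinned, only the query ID and port remain.
p_pinned = 1 / (query_ids * source_ports)

# Pinning buys roughly a factor-of-4 improvement: a marginal gain
# against odds that are still about 1 in 327 million per attempt.
advantage = p_pinned / p_baseline
```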

By pinning the name server IP, the attacker will only gain a marginal advantage. The issue may be more of a problem if one of the servers is compromised. But in this case, DNS spoofing isn't really your #1 priority.

Without DNSSEC, DNS spoofing is certainly possible, and this attack makes it a bit more likely. But this attack is hardly a game changer and only provides a minor advantage to the attacker.

What should you do?

Relax... finish your coffee... read up on DNSSEC and apply BIND patches as they become available (because it is always good to patch.)

Also, the original presentation/paper [2] is available as well, and it is a lot better than some of the news reports covering it.

How hard is it to implement DNSSEC? It isn't trivial, but more recent versions of BIND make it a lot easier by automating some of the re-signing tasks. It is easiest if your registrar supports it and you host your zones with them. For example the registrar I host a couple of my domains with automates the entire process for about $5/year. 
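For reference, the automated re-signing mentioned above looks roughly like this in a named.conf zone statement (BIND 9.9+ syntax; zone name, file, and key directory are placeholders to adapt):

```
// Illustrative named.conf fragment (BIND 9.9+), not a complete config.
zone "example.com" {
    type master;
    file "example.com.db";
    key-directory "/etc/bind/keys";
    auto-dnssec maintain;   // re-sign automatically as signatures expire
    inline-signing yes;     // keep the unsigned zone file untouched
};
```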

[1] http://thehackernews.com/2014/05/critical-vulnerability-in-bind-software.html
[2] https://www.usenix.org/conference/woot13/workshop-program/presentation/hay

We also had another recent article covering some new DNS spoofing techniques:

New tricks that may bring DNS spoofing back or: "Why you should enable DNSSEC even if it is a pain to do"

------
Johannes B. Ullrich, Ph.D.
SANS Technology Institute
Twitter


ISC StormCast for Wednesday, May 7th 2014 http://isc.sans.edu/podcastdetail.html?id=3967, (Wed, May 7th)

Tue, 05/06/2014 - 16:36