The Flame malware attacks continue to generate some interesting reactions on the web. On the one hand, they seem to be all over the place, and yet I am having a hard time disagreeing with most of them. That is probably a sign that we still may not know enough, or that what we do know hasn’t been analyzed enough to produce much consensus. Or it may simply be that, as they say in management, the 10,000-foot view can be very different from the view when you’re on the ground.
In her Disruptors blog for Forbes.com, Parmy Olson talks about three takeaways from Flame: one is that what is happening now between governments will likely be an indicator of what corporate espionage will look like soon (if not already), and another is that some adversaries of nation-states will be forced to go low tech (a la bin Laden, a tactic we have seen repeatedly already in cases of terrorism and asymmetric warfare).
Meanwhile, Johannes Ullrich posted a diary entry at the ISC (Internet Storm Center) that is almost derisive toward Flame and the attention it is receiving. His analysis of the toolset is that it is fairly clumsy compared to some other malware tools available. It seems a lot of network administrators are asking how to detect whether they have Flame infections, and perhaps this is what sparked the author’s rant. He has a good point we should all keep in mind: focusing on a single, obscure threat is no way to design a network defense strategy. Granted, I can understand that these admins are probably going to be asked about Flame by an executive, because it is receiving enough media attention to cross into general awareness.
Last, but certainly not least, there is a longer article on Wired by Mikko Hypponen, the Chief Research Officer at information security company F-Secure, titled “Why Antivirus Companies Like Mine Failed to Catch Flame and Stuxnet”.
I’m still mulling all this over, but I’m planning to come up with an opinion piece down the line.
Just when I thought Flame would push Stuxnet out of the spotlight, the New York Times is reporting that Stuxnet was a joint venture between the US and Israel to attack the Iranian nuclear program. I’m not surprised about the parties involved; it was speculated from early on that this was the case. But I am surprised that they were able to get anything official on it. This information is apparently adapted from a book by David E. Sanger, Confront and Conceal, to be published next week. It will be interesting to see whether the revelation withstands scrutiny, or whether this is a marketing ploy to bump book sales.
I’ve been occupied elsewhere, so I’m just coming up to speed on the latest computer malware (should we be considering these things to be super malware?), Flame. Wired’s Threat Level has a great article on it, written by Kim Zetter, for catching up. Dark Reading has some additional perspective, in an article by Kelly Jackson Higgins, on how this malware may have existed undetected for several years.
Flame seems to be a very robust piece of software that uses a broad set of tools to conduct its mischief and mayhem; its distribution seems to be very targeted, and there are indications that it may be another piece of “state sponsored” code. I keep wondering whether you really need a state to sponsor such projects, or whether any sufficiently organized and motivated group with the right talent and resources could do something similar. Is it really more a difference of approach? Consider the difference between phishing and spear phishing.
All that is scary enough, but the one quote that sends shivers down my spine is this one from Zetter’s article:
The researchers say they don’t know yet how an initial infection of Flame occurs on a machine before it starts spreading. The malware has the ability to infect a fully patched Windows 7 computer, which suggests that there may be a zero-day exploit in the code that the researchers have not yet found.
This NPR article about CEOs receiving cyber briefings from the US military bothers me. As presented, it sounds like an attempt at education through fearmongering. Scared Straight for businesses? I’m sure the appeal of a one-day Top Secret clearance is too great for many executives to pass up. In this day and age, a CEO should not be that surprised. A corporation’s primary goal is to contribute to shareholder value; information security should be considered complementary to that goal, a cost of doing business. And expecting the government to do it all is unrealistic. I don’t think businesses expect the government to pay for the locks and burglar alarms of their physical security (do they?). Honestly, I am more interested in the reactions of the CEOs than in the top secret information given to them; I’d like to know if I’m doing business with companies that just don’t get it.
I saw an article in The Atlantic reporting that cybercrime reports contain staggering amounts of upward bias. There is more coverage at CNET here and at the New York Times here (your firstborn may be required, but this is probably the best content of the lot). Although the methods used to reach this conclusion involve statistical analysis, I think this is a major problem in the field of information security, and it certainly isn’t unique to that field. While the stated purpose of the defenders of the network is to, well, defend the network, there is also the secondary purpose of justifying their own existence, and often of securing scarce resources. And let’s face it, cybersecurity incidents can be stealthy, especially when data loss is the primary outcome. But when the value of cybercrime is estimated at one trillion dollars, many times the value of the drug market, it really doesn’t pass the sniff test.
I’ve got a business card-sized cheat sheet from a Carnegie Mellon CERT course I took several years back; it’s the CERT Coordination Center’s Elements of a Code of Conduct. Sandwiched in amongst a lot of good advice are some gems like “state the facts,” “be truthful,” and “avoid shock tactics.” Good advice, all. Credibility is the currency of the defender of the network; we should spend it wisely.
Came across this article from CERT in the course of my day job; if you think about securing systems at all it’s worth a look, if only for the instant classic photo (worth a thousand words, at least!) they have on the page. Check it out when you have a chance; I don’t want to ruin the surprise.
First saw this on Computerworld, but the Verizon 2011 Cyberattack Report is out. One of the big takeaways is the estimate that 97% of the attacks were avoidable without the need for “difficult or expensive countermeasures”. This seems completely plausible to me, especially since the human element is such a large and vulnerable component of an information security strategy, and because it often seems easier for organizations to throw money at a problem and expect it to go away than to spend the time to really analyze the situation and monitor it on a recurring basis. But information security (much like EM) is a process, not a product.