I’ve been occupied elsewhere, so I’m just coming up to speed on the latest computer malware (should we be considering these things super malware?), Flame. Wired’s Threat Level has a great article on it by Kim Zetter for catching up. Dark Reading has additional perspective, in an article by Kelly Jackson Higgins, on how this malware may have existed undetected for several years.
Flame seems to be a very robust piece of software that uses a broad set of tools to conduct its mischief and mayhem; its distribution appears highly targeted, and there are indications that it may be another piece of “state-sponsored” code. I keep wondering whether you really need a state to sponsor such projects, or whether any sufficiently organized and motivated group with the right talent and resources could do something similar. Is it really more a difference of approach? Consider the difference between phishing and spear phishing.
All that is scary enough, but the one quote that sends shivers down my spine is this one from Zetter’s article:
The researchers say they don’t know yet how an initial infection of Flame occurs on a machine before it starts spreading. The malware has the ability to infect a fully patched Windows 7 computer, which suggests that there may be a zero-day exploit in the code that the researchers have not yet found.
This NPR article about CEOs receiving cyber briefings from the US military bothers me. As presented, it sounds like an attempt at education through fearmongering. Scared Straight for businesses? I’m sure the appeal of a one-day Top Secret clearance is too great for many executives to pass up. In this day and age, a CEO should not be that surprised. A corporation’s primary goal is to contribute to shareholder value; information security should be considered complementary to that goal, a cost of doing business. And expecting the government to do it all is unrealistic. I don’t think businesses expect the government to pay for the locks and burglar alarms of their physical security (do they?). Honestly, I am more interested in the reactions of the CEOs than in the top secret information given to them; I’d like to know if I’m doing business with companies that just don’t get it.
I saw an article in The Atlantic reporting that cybercrime reports contain staggering amounts of upward bias. More coverage at CNET here and the New York Times here (your firstborn may be required, but this is probably the best content of the lot). Although the methods used to reach this conclusion involve statistical analysis, I think this is a major problem in the field of information security, and it certainly isn’t unique to that field. While the stated purpose of the defenders of the network is to, well, defend the network, there is also the secondary purpose of justifying their own existence, and often of securing scarce resources. And let’s face it, cybersecurity incidents can be stealthy, especially when data loss is the primary outcome. But when the value of cybercrime is estimated at one trillion dollars, many times the value of the drug market, it really doesn’t pass the sniff test.
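To see how that kind of upward bias can creep into a survey-based estimate, here’s a minimal sketch (with entirely hypothetical numbers; none of these figures come from the articles above) of how a single outlier respondent can dominate a national total when you extrapolate from the sample mean:

```python
# Hypothetical illustration: 999 respondents report modest cybercrime
# losses, while one respondent reports a catastrophic loss. Extrapolating
# from the sample mean lets that lone outlier set the national estimate.
from statistics import mean, median

losses = [100.0] * 999 + [50_000_000.0]  # per-respondent losses, in dollars
population = 100_000_000                 # hypothetical population represented

# Mean-based extrapolation: dominated by the single huge report.
mean_estimate = mean(losses) * population

# Median-based extrapolation: unaffected by the outlier.
median_estimate = median(losses) * population

print(f"mean-based total:   ${mean_estimate:,.0f}")    # trillions
print(f"median-based total: ${median_estimate:,.0f}")  # billions
```

With these made-up numbers, the mean-based total comes out hundreds of times larger than the median-based one, which is the basic mechanism the researchers point to: unverified, heavy-tailed self-reports plus mean-based extrapolation produce staggering totals.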
I’ve got a business-card-sized cheat sheet from a Carnegie Mellon CERT course I took several years back; it’s the CERT Coordination Center’s Elements of a Code of Conduct. Sandwiched in amongst a lot of good advice are some gems like “state the facts,” “be truthful,” and “avoid shock tactics.” Good advice, all. Credibility is the currency of the defender of the network; we should spend it wisely.
Came across this article from CERT in the course of my day job; if you think about securing systems at all, it’s worth a look, if only for the instant-classic photo (worth a thousand words, at least!) they have on the page. Check it out when you have a chance; I don’t want to ruin the surprise.
First saw this on Computerworld, but the Verizon 2011 Cyberattack Report is out. One of the big takeaways is that they estimate 97% of the attacks were avoidable without the need for “difficult or expensive countermeasures.” This seems completely plausible to me, especially since the human element is such a large and vulnerable component of an information security strategy, and because it often seems easier for organizations to throw money at a problem and expect it to go away than to spend the time really analyzing the situation and monitoring it on a recurring basis. But information security (much like EM) is a process, not a product.
In the EM class I’m taking, we’ve talked about agenda building and policy in relation to emergency management. A natural but unfortunate part of the process is that as the public’s focus turns elsewhere, programs begin to decline. In emergency management, the absence of a particular type of incident tends to undermine focus. In difficult economic times, that decay manifests even more quickly. Cases in point:
All these points have me thinking about the problem from a different angle, and I hope to discuss it further here in the near future.
The 911 system of the future has some challenges ahead of it, similar to the challenges of the Emergency Broadcast System trying to consolidate into CMAS, which I previously discussed. Rather than trying to disseminate information, 911 is trying to collect intelligence to efficiently dispatch resources. In our digitally connected world, with fairly ubiquitous technologies like email, texting, and internet telephony (VoIP, or Voice over Internet Protocol), the POTS-based (Plain Old Telephone Service) 911 system is really beginning to show its age.
But while all these technologies are proven communications tools, integrating them into the 911 call center process could be tricky. A few good points are brought up in Mark Fletcher’s blog entry here, among them that texting to 911 is probably not going to be an option. For one thing, how does the network know where to route a text based on where you are? And as the operator, how do you interpret a 160-character message (which doesn’t seem to be able to carry geolocation information)? I can’t see texting being a very efficient method of communication in these situations anyway, since it’s really a series of awkward one-way messages; it seems like more of a technology of last resort.
Additionally, Fletcher mentions that the current U.S. 911 technology for the hearing impaired, the TTY/TDD system, does not always work well, yet that is a fairly mature technology compared to anything that may be in the pipeline for texting, and it is already required by law. It’s one of many reasons why dispatching police cars to the origin of a silent call can be a good policy. But we already live in a world where we can receive texts from numerous sources that can’t be easily validated; how will we create similar policies for texting? In my experience, complexity is rarely welcome in a system that needs to be failsafe and foolproof, and in an environment with limited resources, it may become necessary to focus on a few solutions, make them as bulletproof as possible, and then communicate the hell out of those solutions so people will use them properly when the time comes.