SecurX Vol 1 Issue 1


VISHING

Heartbleed

After The Hack

HeadsUp for WhatsApp

Identity Marketing

Is A Bigger Budget Adequate?

Enabling Success via Collaboration



I M P R I N T EXECUTIVE EDITOR Mr. L S Subramaniam, Founder - Cloud Alliance India

E D I T O R & P U B L I S H E R Mr. David Chelekat

EDITORIAL BOARD

Mr. Sudeep Bhandopadhyay, MD - Destimoney; Mr. Amol Vidwans, Group CIO, Balaji Infrastructure Group; Mr. Sunil Rawlani, Partner, Metamorphosys, Ex-CIO, HDFC Life; Mr. Thomson Thomas, CIO, HDFC Life Insurance; Mr. Sameer Ratolikar, Sr. VP and CISO, HDFC Bank; Mr. Milind Tandel, Head - IT Procurement, Barclays Bank

ADVISORY BOARD Mr. Dewang Neralla, Mr. L S S Subramaniam, Mr. Alok Tiwari, Mr. Chetan Dighe, Mr. Vishal Salvi

MANAGEMENT Transact360

MARKETING & SALES Footprint Media

RESEARCH Antz Consulting

ENGAGE i3rGlobal Pegasus

EDITORIAL Transact Media

DESIGN & LAYOUT T360Design

CIRCULATION 1SourceMedia

REGISTERED ADDRESS 5/4, Assisi Nagar, P L Lokhande Marg, Chembur, Mumbai 400 043

MARKETING & MAILING 5/4, Assisi Nagar, P L Lokhande Marg, Chembur (W), Mumbai 400 043. This magazine is an aggregation of knowledge articles to help readers and industry make better decisions. The magazine is printed at Ramabhai Press, Vashi, Navi Mumbai and published by David Chelekat for Transact360. All views are those of the authors and solicited advertisers. Readers are requested to make appropriate inquiries before buying into products and services. Transact Media/Transact360 is not responsible for steps taken by readers based on claims or performance of products as made by any advertiser or writer. Readers are requested to verify all claims independently.

Security has become a critical success factor for businesses in India. Increasingly, businesses are discovering that security is a game wherein the rules are ever changing and the opponents are ever ready to outsmart them.

The risks are on the rise. The uncertainty in the global as well as the Indian economy in the past few years has made information security an increasingly challenging game whose outcome can have potentially serious consequences for your business. In today's rapidly evolving threat landscape, businesses run the risk of falling behind, their defences weakening and their security practices dulling. At the same time, their adversaries are becoming ever more sophisticated, breaching the defences of business ecosystems and leaving reputational, financial, and competitive damage in their wake.

Respondents to The State of Industry - Security Edition Survey – India, 2015 seem to be playing an entirely different game. Of the 438 executives across 12 industries who responded to the survey, most expressed a high level of confidence in their organisations' information security practices. Indeed, many believe they are winning. Strategies are deemed to be sound: nearly half (45%) of the respondents see their organisation as a "front-runner" in terms of information security strategy and execution.

The odds, however, are not in their favor. Far too many organisations are reporting degradation in security programs. Risks are neither well understood nor addressed. The number of security incidents is on the rise. Senior executives are frequently seen as part of the problem rather than as keys to the solution. All of these pose significant challenges to the information security environment in organisations. Strategy and culture only pay off if execution is strong. Companies in India are also slightly more confident of the security environment of their suppliers and partners than their global peers. But is that bravado, a lack of information, or fact?

Given today's elevated threat environment, businesses can no longer afford to play a game of chance. They must prepare to play a new game, one that requires advanced levels of skill and strategy to win. Global executives have been meeting at various events to discuss current and emerging security threats. We bring you the best of these discussions for you to evaluate their criticality to the Indian business environment. Meet and discuss with CXOs at the Action Leaders Meet - Security Edition 2015, Mumbai. A detailed report will be published in the inaugural issue of SecurX.

Happy reading...


Third-Party Vendors - A Weak Security Chain Link


Security shortcomings of third-party vendors are a cyber criminal's dream, so security pros should revisit how they manage vendor relationships. Credit card data of 40 million Target customers, the personal information of 15,000 Boston Medical Center patients, and payment card information of 868,000 Goodwill customers were exposed as a result of data breaches not at the companies themselves, but at vendors with access to the companies' systems. A recent study found that one third of U.S. retailers that experienced a data breach within the past year were compromised via third-party vendors. And it's not a new problem. It has traditionally been much easier to go through third-party vendors, especially smaller firms and contractors, than to attack a large, secure organization directly. Consider the services industry – insurers, law firms, consultancies – these have long been vectors for infiltration into much larger client companies. No company is immune. A smaller company, perhaps one that offers janitorial, concierge, gardening, provisioning or other maintenance services, might not have been considered a good vector five or six years ago, but today any will do, because increasingly even these small companies make use of connected devices, a small IT system, a web presence. In many cases, that's all a hacker needs to get into your systems.

It's crucial to consider which data and business systems your vendors can access. In some cases these are explicitly shared, such as a marketing relationship that involves shared customer data; in other cases they're implicit, such as an IT admin who performs database maintenance on a server storing sensitive data. Implicit access can be harder to understand, and therefore harder to manage. Regardless of their area of expertise, any vendor inevitably brings a huge set of unknowns into a relationship. While a vendor may do a great job providing a specific service, they may unnecessarily expose themselves to risk in other ways, and that risk may transfer to you as a customer. For example, a point-of-sale vendor may do a great job managing those systems, but run a vulnerable web application for their own corporate web site.

Access Control, Authentication and Monitoring

Remote access is one of the most critical aspects to consider in any third-party relationship.

It is common among third parties that perform infrastructure management, and because these third parties often have privileged access, it can represent a significant vulnerability to an enterprise environment if not properly protected. The best methods of protection lie in well-defined access control, strong authentication and monitoring. In the authentication space, security technologies are evolving to increase their adaptive capabilities through contextual awareness, such as an understanding of the where, what, who and why aspects of the users. For example, geolocation can trigger additional vetting of the user, such as asking for further details. It is advisable to require that any vendor with access to your environment use two-factor authentication, and to ensure that vendors only have access to the resources they need to do their jobs. Many vendors are small businesses that are typically ill-equipped to respond to threats and may lack common security precautions such as two-factor authentication and full-disk encryption.

Network segmentation and event monitoring can help reduce the risk of a breach caused by third-party access. It is advisable to provide dedicated systems for vendors to work from, and to prevent vendors from connecting directly to your network. While service providers rarely offer much visibility into what's happening within their environments, you should request that your data be hosted on a dedicated group of systems, with only senior staff allowed access to manage those systems, and that an activity log be provided for any access to the environment. The best time to add these requirements is during the initial sales process or a subsequent renewal.

Yes to Testing, No to Trust

Service providers should also be able to demonstrate that they've had regular penetration tests performed by qualified security firms, and they should be able to show you the executive summary from their most recent penetration test. From your end, consider conducting your own regular security assessment of their service, or include them in the scope of your next security assessment or penetration test.

Enforcing security policies and access controls for employees, external IT service providers, and vendors can be challenging. When individuals have privileged access to systems on your network, you need to ensure those accounts are managed securely and are auditable. You can give third-party IT vendors secure remote access to the systems and devices on your network while still addressing the crucial IT security requirements of industry regulations such as PCI DSS and HIPAA. Granular permissions, SSL encryption, and detailed audit reports guard every support session. 1) Create policy templates to manage permissions 2) Define which systems may be accessed and when 3) Enable internal and external IT to support your systems 4) Track and report on privileged accounts. A minimal sketch of such a vendor access policy is shown below.
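The four numbered controls above lend themselves to a machine-readable policy. The sketch below is a minimal illustration in Python under stated assumptions: the vendor names, system names and policy fields are invented for the example and do not refer to any particular product.

```python
from datetime import datetime, timezone
from typing import Optional

# Hypothetical policy template: which systems a vendor may reach, during
# which hours, and whether two-factor authentication is required.
POLICIES = {
    "acme-pos-support": {                    # illustrative vendor name
        "allowed_systems": {"pos-gw-01", "pos-gw-02"},
        "allowed_hours_utc": range(8, 18),   # 08:00-17:59 UTC only
        "requires_2fa": True,
    },
}

AUDIT_LOG = []  # in practice an append-only store, not an in-memory list

def authorize(vendor: str, system: str, passed_2fa: bool,
              when: Optional[datetime] = None) -> bool:
    """Return True only if the request matches the vendor's policy."""
    when = when or datetime.now(timezone.utc)
    policy = POLICIES.get(vendor)
    allowed = (
        policy is not None
        and system in policy["allowed_systems"]
        and when.hour in policy["allowed_hours_utc"]
        and (passed_2fa or not policy["requires_2fa"])
    )
    # Every decision is recorded so privileged access stays auditable.
    AUDIT_LOG.append((when.isoformat(), vendor, system, allowed))
    return allowed

if __name__ == "__main__":
    print(authorize("acme-pos-support", "pos-gw-01", passed_2fa=True))   # permitted
    print(authorize("acme-pos-support", "hr-db-01", passed_2fa=True))    # denied
```

A real deployment would back this with a privileged access management product; the point of the sketch is only that the permission, time window and audit trail live in one enforceable place.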

Ultimately, trust no one!

The phrase 'I trust my team' doesn't cut it with distributed computing, cloud providers, and third-party partners that introduce additional points of access into your cloud environment. If you are running sensitive applications or data in the cloud, encrypt them. If you have vendors accessing your data center for any reason, make sure you can control, monitor and audit what they are doing. When evaluating any new vendor, make a basic security survey a key part of the process. Ask questions about the organization's access control policies, physical security measures and password procedures. Is their software up to date and patched? Ask whether they have ever been involved in a security breach. Having some documentation will at least give you some legal standing in the event that you are breached. Carefully auditing a vendor's security before hiring them is the best way to reduce your risk of a third-party breach. Red flags such as a lack of security awareness or poor software development practices can often be inferred from a brief questionnaire during the sales process. Make expectations regarding security as clear as possible to any new vendor before you start working together. Consider the same policies you apply to your own organization around data classification and protection, identity and authentication, and best-practice security processes – why wouldn't the same requirements apply to third-party vendors?

Jeff Goldman is a freelance journalist based in Los Angeles. He can be reached at jeff@jeffgoldman.com.


Heartbleed wasn't fun. It represents us moving from 'attacks could happen' to 'attacks have happened', and that's not necessarily a good thing. The larger takeaway actually isn't 'This wouldn't have happened if we didn't add Ping'; the takeaway is 'We can't even add Ping – how the heck are we going to fix everything else?'. The answer is that we need to take Matthew Green's advice: get serious about figuring out what software has become Critical Infrastructure to the global economy, and dedicate genuine resources to supporting that code.

It took three years to find Heartbleed. We have to move towards a model of ‘No More Accidental Finds’. You know, I’d hoped I’d be able to avoid a long form write up on Heartbleed. Such things are not meant to be. I am going to leave many of the gory technical details to others, but there’s a few angles that haven’t been addressed and really need to be. So, let’s talk. What to make of all this noise? First off, there’s been a subtle shift in the risk calculus around security vulnerabilities. Before, we used to say: ‘A flaw has been discovered. Fix it, before it’s too late.’ In the case of Heartbleed, the presumption is that it’s already too late, that all information that could be extracted, has been extracted, and that pretty much everyone needs to execute emergency remediation procedures. It’s a significant change, to assume the worst has already occurred. It always seems like a good idea in security to emphasize prudence over accuracy, possible risk over evidence of actual attack. And frankly this policy has been run by the privacy community for some time now. Is this a positive shift? It certainly allows an answer to the question for your average consumer, ‘What am I supposed to do in response to this Internet ending bug?’ Well, presume all your passwords leaked and change them! I worry, and not merely because ‘you can’t be too careful’ has not at all been an entirely pleasant policy in the real world. We have lots of bugs in software. Shall we presume every browser flaw not only needs to be patched, but has already been exploited globally worldwide, and you should wipe your machine any time one is discovered? This OpenSSL flaw is pernicious. We have had big flaws before, ones that did not just provide read access to remote memory either. Why are we faced with a freak out here? Because we expected better, here, of all places. There’s been quite a bit of talk, about how we never should have been exposed to Heartbleed at all, because TLS heartbeats aren’t all that important a feature anyway. The security community is complaining about Pinging again. This is, of course, pretty rich... given that it seems that half of us just spent the last few days pinging the entire Internet to see who’s still exposed to this particular flaw. We in security sort of have blinders on, in that if the feature isn’t immediately and obviously useful to us, we don’t see the point. In general, you don’t want to see a protocol designed by the security community. It won’t do much. In return (with the notable and very appreciated exception of Dan Bernstein), the security community doesn’t want to design you a protocol. It’s pretty miserable work.

Savagejen: It turns out the code wasn't authored at 11pm GMT on New Year's Eve; that's just when the commit was accepted into the core. Who knows if that means code review happened just prior or not.

Daniel Lange: The problem is: nobody is willing to pay for this, Dan. There is not sufficient revenue to be made in preventing these bugs and there is no economic disadvantage in keeping the current level of risk. We'd need to have companies fined severely for every lost customer data record, or for exploits that could have been prevented ("state of the art"-type legislation). Once companies have a risk of $50 per lost record to account against, there is sufficient advantage in paying for security audits instead. And due to the beauty of most relevant code being open source, society as a whole could benefit.

Mark Overholser: Let's say, for the sake of argument, that companies did pay for code audits and discover flaws like this in the open-source code they were using. Under the model you're describing, there would definitely be an incentive for them to fix the flaw for themselves. However, I don't believe there would be a strong enough incentive for them to push their fix (the revelation and patching of which cost them money) upstream into the public code base. Yes, there is a long-term cost involved in maintaining the patch in-house and having to re-validate and re-apply the patch as newer releases of the public code come out, and that does create some incentive to publicly disclose the results so that the patch would be maintained by the public. However, there would also be an incentive to keep the patch private for the very reason that public disclosure would potentially help competitors, and I think that companies in a predominantly capitalistic society would not want to give their competitors the benefit of protections afforded by a patch that they themselves had to pay to discover and fix. Instead, keeping the patch private affords them protection, as well as the potential that their competitors might be exploited (not by them, of course...) and fined. In summary, I agree that economic sanctions on companies who "allow" themselves to be breached would create incentives for them to be safer, but I disagree that society would readily reap the benefits by getting more upstream patches and better code. This doesn't even take into consideration the fact that some companies would simply accept the risk of sanction and do nothing to prevent exploitation. Exploitation and fines aren't guaranteed, just a potential risk, therefore the associated costs are intangible. The cost of paying someone to look for flaws and fix them is entirely tangible. As a result, I believe that many companies would simply look to spend their money elsewhere, where the benefits would be more easily compared to the costs (advertising, public relations, lead generation, improved operational efficiency, etc.).

Ryan: How about the two big elephants in the room: 1) Why does adding a small commit to a non-security-related part of my code endanger my entire application? 2) Why does running application X on potentially malicious inputs put my entire machine at risk (or even affect unrelated components)? Instead of throwing money at stopping never-ending instances of these problems (such as code reviews that would have prevented Heartbleed and a potential decompression vulnerability), we need to take a step back and determine the best way to tackle these *classes* of problems.


Thanks to what I'll charitably describe as "unbound creativity", the fast and dumb and un-modifying design of the Internet has made way for a hodgepodge of proxies and routers and "smart" middle boxes that do who knows what. Protocol design is miserable; nothing is elegant. Anyone who's spent a day or two trying to make P2P VoIP work on the modern Internet discovers very quickly why Skype was worth billions. It worked, no matter what. Anyway, in an alternate universe TLS heartbeats (with full ping functionality) are a beloved security feature of the protocol, as they're the key to constant-time, constant-bandwidth tunneling of data over TLS without horrifying application-layer hacks. As is, they're tremendously useful for keeping sessions alive, a thing I'd expect hackers with even a mild amount of experience with remote shells to appreciate. The Internet is moving to long-lived sessions, as all Gmail users can attest. KeepAlives keep long-lived things working. SSH has been supporting protocol-layer KeepAlives forever, as can be seen here: http://www.kehlet.cx/articles/129.html http://patrickmylund.com/blog/how-to-keep-alive-ssh-sessions/ https://drupal.star.bnl.gov/STAR/comp/sofi/facility-access/ssh-stable-con https://pthree.org/2008/04/16/keeping-your-ssh-connection-alive/ The takeaway here is not "If only we hadn't added ping, this wouldn't have happened." The true lesson is, "If only we hadn't added anything at all, this wouldn't have happened." In other words, if we can't even securely implement Ping, how could we ever demand "important" changes? Those changes tend to be much more fiddly, much more complicated, much riskier. But if we can't even securely add this line of code: if (1 + 2 + payload + 16 > s->s3->rrec.length) ... I know Neel Mehta. I really like Neel Mehta. It shouldn't take absolute heroism, one of the smartest guys in our community, and three years for somebody to notice a flaw when there's a straight-up length field in the patch. And that, I think, is a major and unspoken component of the panic around Heartbleed. The OpenSSL dev shouldn't have written this (on New Year's Eve, at 1 AM, apparently). His co-authors and release engineers shouldn't have let it through. The distros should have noticed.
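To make the missing check concrete, here is a deliberately simplified Python sketch of what a heartbeat responder does. It is not the OpenSSL code: the record layout is reduced to type, length, payload and padding, and the function name is invented for illustration.

```python
import struct

def parse_heartbeat(record: bytes) -> bytes:
    """Echo back the payload of a heartbeat request (simplified sketch).

    Assumed layout: 1 byte type, 2 bytes claimed payload length, then the
    payload followed by at least 16 bytes of padding.
    """
    claimed_len = struct.unpack(">H", record[1:3])[0]

    # The Heartbleed-class mistake is trusting claimed_len without comparing
    # it to the size of what actually arrived. In C that over-read returns
    # adjacent heap memory; in this Python sketch it would merely return
    # fewer bytes, so the check below stands in for the real fix:
    #   if (1 + 2 + payload + 16 > s->s3->rrec.length)  ->  discard the record
    if 1 + 2 + claimed_len + 16 > len(record):
        raise ValueError("claimed payload length exceeds the received record")

    return record[3:3 + claimed_len]

if __name__ == "__main__":
    # Honest request: 5-byte payload plus 16 bytes of padding.
    good = bytes([1]) + struct.pack(">H", 5) + b"hello" + b"\x00" * 16
    print(parse_heartbeat(good))    # b'hello'

    # Malicious request: claims 4000 bytes but sends almost nothing.
    evil = bytes([1]) + struct.pack(">H", 4000) + b"hi"
    parse_heartbeat(evil)           # raises ValueError once the check is in place
```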

David Collier Brown: Anyone communicating with application X is able to benefit from bugs in X, which sounds like (i) not everyone should be privileged to receive from X, and (ii) not all parts of X should have the same privileges to X's data. Add to this the fact that this particular X is managing data for Y, Z, W and numerous others, none of which should be retained except under stringent confidentiality controls... Regrettably, the proposed solutions to these have dysfunctions of their own...

Lennie: I don't know enough of the TLS protocol by heart to say, but would it be possible to do process separation? Where you have three processes: one that produces the data, a second that handles encryption of that data, and a third that handles the details of the TLS protocol while seeing only the encrypted data. My guess would be no, but could something like TLS 1.3 support it?

David Collier Brown: I don't think the protocol would particularly constrain the structure of the program. We do arguably need a pattern and/or architecture of separation of concerns in security-involved programs...

Ryan: Any architecture would require abstractions, but C is unsuitable for providing these abstractions since it doesn't enforce them (for example, a bounds violation lets an attacker potentially read unrelated pieces of memory; in some cases it lets him run arbitrary code). Lennie's solution of multi-processing is one step up, since processes do enforce abstraction of separate memory layouts (i.e., by default process A can't read process B's memory), so if done correctly, it could have prevented Heartbleed. That said, if your language requires you to isolate every part of your program in a separate process to achieve isolation, the development and run-time overhead is a lot higher (probably unacceptable). At this point, isn't it time to use a language that enforces abstraction?

Lennie: I guess it's a similar problem to offloading encryption to hardware. If only the encryption part is offloaded to another process, then at least the private key can never be leaked in a similar fashion to this.

PX: As an eternal optimist, I'm obliged to point out the bright side of all this, which is the fact that we are actually *finding* bugs like these now, rather than having them littered across all of our most trusted software stacks for eternity. I do think that this particular finding came as the result of increased interest, due to bug bounty programs sponsored by Google, Facebook, Microsoft, etc.: https://hackerone.com/ibb http://googleonlinesecurity.blogspot.com/2013/10/going-beyond-vulnerability-rewards.html Yes, I'm aware that Neel is a Google employee, and that he donated his entire $15k bounty from IBB to the Freedom of the Press Foundation, so clearly cash wasn't his primary motivation, but the fact that major companies are supporting these programs definitely adds an extra air of prestige to the findings. What we should be doing is encouraging more big companies to join in and start rewarding those who make the Internet more secure for everyone. Hopefully this trend of rewarding researchers continues, or even grows, because as it does, stupid bugs like this will become more and more rare, and the general public can start feeling somewhat safe in their daily use of modern technology.

Howard B. Golden: Mr. Kamp, what you wrote needed to be said. Thank you for saying it. I have been thinking about a "quality improvement institute, factory and service" that might help. Open source is well established now in industry, and the problems are apparent. However, the benefits are seen as even greater. This creates an opportunity to improve existing open source software and future development and maintenance processes. (Not surprisingly, this same opportunity exists for closed source software.) Therefore, there should be a willingness on the part of government and industry to fund quality improvement. (I will not specify whether such organization(s) are for-profit or not-for-profit, but their products would need to be open for use without compensation for them to benefit open source developers.) Several approaches should be pursued: (1) Improved training and education for developers; (2) Identification and promulgation of better or best practices (in choices of languages, tools and development processes); (3) Identification of existing software most in need of remediation or replacement; (4) Funding remediation/replacement of the most important needs; (5) Outreach to existing projects with offers to help adopt better practices; (6) Outreach to academia to encourage training in better practices; (7) Advice for starting new projects following best practices appropriate for the project size; (8) Funding development of tools to support better or best practices; (9) Review of existing tools and offering constructive suggestions for improvement, perhaps with funding. I would be happy to elaborate on these ideas with anyone interested.

Kune: This whole situation reminds me of the poor fire protection during the Middle Ages. Strasbourg burned down eight times in the 14th century. Later, regulations were written but not enforced, a factor in the Great Fire of London in 1666, which destroyed the homes of 70,000 of the city's 80,000 inhabitants. The issue is that there are industries that know how to write secure code. Companies that provide the software-controlled brakes for modern cars have processes to prevent bugs that would cost them millions if their customers had to recall cars to fix those bugs. Writing code in those environments is not fun, but why should the production of software be fun if several hundred million people depend on it? My expectation, however, is that the IT/cyber industry needs to be burned or bled several times more before governments will outlaw the use of memcpy in protocol-level functions and enforce such rules. Access to packet content should always happen through functions operating on packet buffer abstractions. This requires, however, proper documentation of the internal APIs.
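Kune's point about packet buffer abstractions can be sketched in a few lines. This is Python rather than C, and the class and method names are invented for illustration; the idea is only that every read goes through one accessor that enforces bounds, instead of raw offset arithmetic scattered through protocol code.

```python
class PacketBuffer:
    """Wraps a received record so all reads are bounds-checked in one place."""

    def __init__(self, data: bytes):
        self._data = data

    def read(self, offset: int, length: int) -> bytes:
        # Centralised check: no caller can read past the received bytes.
        if offset < 0 or length < 0 or offset + length > len(self._data):
            raise IndexError(
                f"read of {length} bytes at offset {offset} exceeds "
                f"{len(self._data)}-byte record"
            )
        return self._data[offset:offset + length]

    def read_u16(self, offset: int) -> int:
        hi, lo = self.read(offset, 2)
        return (hi << 8) | lo

if __name__ == "__main__":
    buf = PacketBuffer(b"\x01\x0f\xa0hello")
    print(buf.read_u16(1))      # 4000: the *claimed* payload length
    print(buf.read(3, 5))       # b'hello'
    print(buf.read(3, 4000))    # raises IndexError instead of leaking data
```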


Somebody should have been watching the till, at least this one particular till, and it seems nobody was. Nobody publicly, anyway. If we’re going to fix the Internet, if we’re going to really change things, we’re going to need the freedom to do a lot more dramatic changes than just Ping over TLS. We have to be able to manage more; we’re failing at less. There’s a lot of rigamarole around defense in depth, other languages that OpenSSL could be written in, “provable software”, etc. Everyone, myself included, has some toy that would have fixed this. But you know, word from the Wall Street Journal is that there have been all of $841 in donations to the OpenSSL project to address this matter. We are building the most important technologies for the global economy on shockingly underfunded infrastructure. We are truly living through Code in the Age of Cholera. Professor Matthew Green of Johns Hopkins University recently commented that he’s been running around telling the world for some time that OpenSSL is Critical Infrastructure. He’s right. The conclusion is resisted strongly, because you cannot imagine the regulatory hassles normally involved with traditionally being deemed Critical Infrastructure. A world where SSL stacks have to be audited and operated against such standards is a world that doesn’t run SSL stacks at all. And so, finally, we end up with what to learn from Heartbleed. First, we need a new model of Critical Infrastructure protection, one that dedicates real financial resources to the safety and stability of the code our global economy depends on – without attempting to regulate that code to death. And second, we need to identify that code. When I said that we expected better of OpenSSL, it’s not merely that there’s some sense that security-driven code should be of higher quality. (OpenSSL is legendary for being considered a mess, internally.) It’s that the number of systems that depend on it, and then expose that dependency to the outside world, are considerable. This is security’s largest contributed dependency, but it’s not necessarily the software ecosystem’s largest dependency. Many, maybe even more systems depend on web servers like Apache, nginx, and IIS. We fear vulnerabilities significantly more in libz than libbz2 than libxz, because more servers will decompress untrusted gzip over bzip2 over xz. Vulnerabilities are not always in obvious places – people underestimate just how exposed things like libxml and libcurl and libjpeg are. And as HD Moore showed me some time ago, the embedded space is its own universe of pain, with 90’s bugs covering entire countries. If we accept that a software dependency becomes Critical Infrastructure at some level of economic dependency, the game becomes identifying those dependencies, and delivering direct technical and even financial support. What are the one million most important lines of code that are reachable by attackers, and least covered by defenders? (The browsers, for example, are very reachable by attackers but actually defended pretty zealously – FFMPEG public is not FFMPEG in Chrome.) Note that not all code, even in the same project, is equally exposed. It’s tempting to say it’s a needle in a haystack. But I promise you this: Anybody patches Linux/net/ipv4/ tcp_input.c (which handles inbound network for Linux), a hundred alerts are fired and many of them are not to individuals anyone would call friendly. One guy, one night, patched OpenSSL. Not enough defenders noticed. We fix that, or this happens again. And again. 
And again. No more accidental finds. The stakes are just too high.

Dan Kaminsky: My understanding is that the Toyota brake failures were caused by a stack overflow virulent enough to occur without any external influence (i.e. network attackers).

Gilbert: Doing complete code reviews for security takes time and is hard. The OpenBSD developers did one, years ago. They took their code base and did huge code reviews. By doing so, they found countless "bugs" and soon realized that most bugs could have become holes in security. Not only are good programming and careful review of code at commit time important (using automated tools there to look for possible buffer overflows is good), but you need to check the whole code base too. The OpenBSD experience shows it gives good results and rewards, but it takes a long time to do, and requires experienced people. And you must do it when well awake and attentive. If you do this for too long, you will have bugs right in front of your eyeballs that you won't see. Using tools to look for vulnerabilities is a start, perhaps?

DK: Do you think certifiable software (i.e. software that can be mathematically proven to be bug-free) will ever, pragmatically speaking, be a part of the solution? For example, the Flint project at Yale.

Dan Kaminsky: The problem is that it's deeply tempting to define your problem such as to make the math tractable, but the actual delivered security unknown. You can very correctly do the wrong thing.

Rilind Shamku: This is shocking! A very trivial length check can cause all this mayhem. Anyone who has read a few C security books would have been able to avoid this fundamental error. This is not a C issue; it is definitely a human error. C was designed for efficiency and to give the programmer more power, especially around memory allocation. I cannot believe that a project as crucial for the Internet as OpenSSL would let this fly by without checks. I understand that the project is underfunded (ridiculous – obviously giant Internet companies don't care about investing in security), but still, a simple source code analysis session would have been able to find this coding error. This leads me to believe that there might be more trivial errors like this one in the code. So with all that said, I believe it is crucial that we spend the same amount of time that we use to develop code on developing source code analysis tools to check our code. Source code analysis is the first step of any code review. I hope this is the only major vulnerability in OpenSSL.

Craig: According to comments all over the Internet, Heartbleed could have been prevented with: more eyeballs, better code reviews, banning , more auditing, and/or automated code analysis to save us from ourselves. To this discussion let me add two words I rarely see mentioned: unit tests. Most useful, long-lived code is simply too complex for humans to find critical problems simply by eyeballing it. Imagine if there were a testing framework already in place – how much more likely it would have been to ask, "What happens if these two numbers which are supposed to be the same were actually different?" Although they cannot guarantee a better outcome, unit tests establish a framework for testing failure hypotheses and catching regressions. I will note that the original code submission did not include tests. Neither did the fix. And overall, OpenSSL appears not to be well structured for unit testing, so it's not right to blame the patch authors for not re-architecting the entire library just to submit their changes. Instead, the responsibility is on the primary maintainers to cultivate testable code, and to require tests with every bug fix or new feature. None of us is looking forward to the next critical bug in OpenSSL or elsewhere in OpenSourceLandia. I don't know where it will be, but I can tell you how to place your bets: it will be in code that is complex, in widespread use, and lacking a solid test suite. Writing tests is not sexy, and most developers don't want to do it. Their code works, so why write redundant code to prove that it works?! Besides, unit tests are not shipped, etc. The point is that unit tests are a gift to the future engineer (maybe yourself) which helps prevent your work from being broken by someone who doesn't understand or remember all the nuances. I am not a test engineer. However, I am a software engineer who has come to strongly appreciate the value of unit tests, and of designing code which can be unit tested.
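Craig's "two numbers that are supposed to be the same" observation translates directly into a test. The sketch below reuses the simplified, invented parse_heartbeat function from the earlier sketch (not OpenSSL's API) and shows the one test case that a Heartbleed-style bug would fail.

```python
import struct
import unittest

def parse_heartbeat(record: bytes) -> bytes:
    """Simplified responder from the earlier sketch (invented, not OpenSSL)."""
    claimed_len = struct.unpack(">H", record[1:3])[0]
    if 1 + 2 + claimed_len + 16 > len(record):
        raise ValueError("claimed length exceeds record")
    return record[3:3 + claimed_len]

class HeartbeatLengthTests(unittest.TestCase):
    def test_honest_request_is_echoed(self):
        record = bytes([1]) + struct.pack(">H", 5) + b"hello" + b"\x00" * 16
        self.assertEqual(parse_heartbeat(record), b"hello")

    def test_claimed_length_larger_than_record_is_rejected(self):
        # The Heartbleed case: the two lengths disagree.
        record = bytes([1]) + struct.pack(">H", 4000) + b"hi"
        with self.assertRaises(ValueError):
            parse_heartbeat(record)

if __name__ == "__main__":
    unittest.main()
```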


Is A Bigger Budget An Adequate Measure Of Security Efficacy?

Bigger budgets - the envy of security professionals and the scourge of CISOs the world over. While we'd all like bigger budgets to make security better within our organizations, getting more money to spend isn't necessarily a harbinger of goodness to come. Earlier, a fantastic conversation broke out on Twitter (where else?), and it started with a tweet from Tony Vargas retweeted by Adrian Sanabria. The conversation got a little snarky about how throwing money at a problem clearly doesn't indicate that it'll get any more attention or be any closer to being solved. I then made a comment about the American budget and how spending more isn't really helping there - OK, that's a stretch, but the parallels are clear, I think. Stephen Coplan made an interesting point which I've seen made many, many times - but I believe it to be false. (A point of clarification - Stephen pointed out that he's not implying more money equals more efficacy, and I don't intend to represent his comments as such.) I personally do not believe a bigger budget means anything specifically, so to equate a higher budget with more relevance - I believe that to be false. I have personally witnessed first-hand how organizations take budget increases, spend wildly on necessary widgets, and then fail to operationalize.

Security isn't about spending more; it never has been. In fact, a rapid increase in spending generally means that something went publicly wrong and the budget-holders are trying to make a public display of their sensitivity to fix the issues. Unfortunately, all too often these are simply that - public displays with little follow-through. I believe that rather than focus on how much more money an organization spends as a measure of its seriousness in addressing security issues, we should be focusing on resources. You see, "resources" includes everything necessary: the critical people aspect as well as the widgets and gadgets that come in 1U rack-mountable formats to address the issues. Better security comes from better training of existing resources, more executive backing, better communications, and more operational support. Better security comes from a shift in culture: a willingness by security professionals to reach out to the business side and align better to goals and needs, and a concerted, serious effort by the business folks to understand that security issues and breaches aren't just web site defacements anymore. Security (or rather the criminal aspect of the game) is big business, with highly industrialized and specialized trades and vertical markets. Addressing security as a technology problem will lead to more breaches and more lost revenue, productivity, shareholder value and trade secrets, to name a few of the obvious.

Security isn't a "their problem" anymore; in fact, it never has been. If you're at all paying attention to the absolute worst-case scenario that Sony Pictures is living through right now (Steve Ragan at CSO is churning out an excellent series on the matter; I highly recommend you give it a read), you are becoming painfully aware that we're past business disruption, web site defacements and DDoS. We're into business destruction of the kind that has the potential to cost a company hundreds of millions of dollars, not just today but for years to come. What will it take for companies to take security seriously, and how will we measure that jump? I don't think the upward delta in budget size is the only indicator here. I believe we need to look at the overall resource allocation to understand whether security is being addressed as a cultural issue in the company, or whether we're just given more capital to buy shiny widgets with. In the end, Casey John Ellis had the tweet that made our point eloquently. I think he said it best when it comes to the ability to "buy more stuff" for CISOs, in relation to that making a positive program-level impact on the organization... and this, my friends, about sums up my feelings on the matter.



Vishing, or eliciting information over the phone, is a common social attack vector. It's proven to be one of the most successful methods of gaining the information needed to breach an organization, even when used by an inexperienced attacker. When you can't hack your way through your pentest, when you can't break in with your red team, when your phishes are blocked or ignored... simply call someone up and get the info you need. The question now is: how do we succeed? Let's go over five easy points that contribute to successful vishing.

For our first point, let's look to something Sun Tzu is often quoted as saying: "It is said that if you know your enemies and know yourself, you will not be imperiled in a hundred battles..." A common, cliched saying, but very true in this context. Knowing as much as possible about your target or targets beforehand is key. Every little piece of information can hold value in your engagement. Collect, categorize, and have your information ready during your calls.

Secondly, your pretext is vitally important to a successful engagement. There are many articles on formulating a good pretext, such as the one found in the Pretexting section of our framework. This post may help you understand some basics to keep in mind. For instance, make sure any pretext you choose is going to match up with how you come across.

If you are a young male, it is going to be very difficult to play the part of someone's grandmother over the phone, and while it may be possible using other vectors, and even highly successful as a pretext, it's most likely not going to be the best one for you. Also, it's better to have a pretext that is something you are comfortable with, if not one with which you are familiar. The best case is having either experience in the field of your pretext or a pretext that suits your personality or skills. This will allow you to adopt it more fully and carry it out more believably. Once you have your pretext in mind, become the pretext, act like the pretext, and talk like the pretext throughout the entire engagement.

Our third topic centers on another principle of influence: Commitment and Consistency. Once we are that person, we stay that person through both success and shutdown. The moment we break character is the moment red flags go up in our target's mind. They don't know we aren't who we say we are, and they often don't know the value of the information they hold. This allows us to draw attention away from security awareness and be successful in becoming someone they trust. We maintain character despite everything they ask, throw, or say at us.

The fourth piece of our vishing puzzle is flexibility. There will be times when our information comes up short or our pretext is questioned, and even the most consistent attacker ends up raising some flag in the target's mind. We need to be flexible while being able to hold character without losing our cool, acting and responding as the pretext would. Our targets are human beings capable of infinite diversity and chaos. Things happen and we must adapt.

Our final, and maybe the most important, part of this process is documentation. Good documentation and notes are key. If at all possible, take notes in physical, digital, or audio form while on your call. After a successful attack (or especially an unsuccessful one) notes can be very valuable to review. Documentation can provide key pieces of information for future attacks or provide hints as to why we succeeded or failed.

A five-point recap of vishing success: 1. Knowledge is power, so get as much of it as you can before you pick up the phone. 2. A believable and appropriate pretext that you can pull off. 3. Stick with your pretext, commit, and be consistent. 4. Be flexible. Adapt without breaking character. 5. Documentation. Take it seriously. These five aspects will lead to success as a professional visher and can even lead to success despite "failures." If you follow these principles you will find yourself growing to be an adept and proficient visherman. So, 'til next time... "Gone Vishing."

Aggregated from Social-Engineer.Org


Stopping Cyber Heists Via Customer Devices

In online banking and payments, customers' PCs have become the Achilles' heel of the financial industry as cyber-crooks remotely take control of the computers to make unauthorized funds transfers, often to faraway places. That's what happened to the town of Poughkeepsie in New York earlier this year, to the tune of $378,000 carried out in four unauthorized funds transfers from the town's account at TD Bank. First discovered in January, the town was finally able to get the full lost amount restored by March, according to public records, through tense interaction with the bank. Though the town declines to discuss the matter, this high-value cyber heist, along with a slew of other incidents in the past year, has many bank officials worried. They're concerned that the customer desktop, especially in business banking where dollar amounts are high, is increasingly the weak link in the chain of trust. Schools and churches aren't immune, either. One cyber crime department reported late last year that it gets several new victim complaints each week. And businesses should be even more worried than consumers about whether banks will restore monies stolen by cyber crooks exploiting compromised computers using botnet-controlled malware. Disputes over hijacked computers and fraudulent transfers are erupting into the public eye as businesses quarrel with their banks over who is at fault when a cyber-gang manages to make off with the money. The restoration of lost funds occurs on a case-by-case basis. The dilemma for banks boils down to this: how far can they go to help protect customer desktops that function like part of their shared network but aren't owned by the bank? Banks are faced with the prospect that customers own PCs that have been in the hands of crime syndicates. Many banks find themselves getting more involved in helping customers defend their machines. One recent step has been to make specialized protective software available free of charge for a bank's 100,000 e-banking customers; if a problem is detected, the bank will call them and tell them. Cyber crooks would rather target high-dollar automated clearinghouse (ACH) transfers and other substantial payment transfers from business customers, but they wouldn't turn down what they might be able to get from consumers doing online e-banking.

No bank is immune from facing these ACH issues arising from a computer malware attack. Hackers or crackers or online criminals – whatever you call them – are working every minute of every day creating new malware that scams users more effectively. But most of these threats are new arrangements of old tunes. So with the right protection and a little savvy, you can avoid just about every digital threat you face.

What is the threat? The first malware typically vandalized PCs and destroyed files. But since the early 2000s, the primary motive for malware creation has been profit. Online criminals are after your banking information or your credit card numbers or your computer's processing power. In the worst-case scenario, they may even be after private information that could be used to extort you or harm your business. And these sorts of attacks generally have both financial and psychological costs. The true costs of malware can include your data, your content, your time, your effort and your heartache.

Want a few examples of malware mayhem? Spyware tracks you for advertising purposes and slows down your computer doing so. Keyloggers monitor your every keystroke in an effort to steal your credit card information. Scareware imitates anti-virus software in an attempt to extort a quick payment from you. Ransomware takes your files and demands a ransom in return. Rootkits can turn your PC into a zombie computer and use it to send out email spam, to host illegal materials or to join in attacks on websites. There's even malware that exists just to create a fake login page for your bank's website to steal your account information.

How does malware end up on your PC? Generally you end up with malware because you installed it. And whether you installed it or not, malware can only work if the program runs without being shut down or deleted. Malware works like most scams – it requires some conscious or unconscious help from its victims. Thus online criminals have to know more than computer coding. They have to know what mistakes users are likely to make so they can trick people who have probably NEVER fallen for a scam in their real lives. And the best criminals can convince even cautious users to make one wrong click.

Providing online bank customers with security software is an imperfect cybercrime antidote.

How do you end up downloading and installing malware? Sometimes malware gets packaged in with more legitimate software. Sometimes simply clicking on a fake error message can trigger a drive-by malware download. And you're certainly aware of the classic method of disguising malware in an email attachment.

You put yourself at risk when you're downloading from a disreputable source or a peer-to-peer network of strangers. Seeking out "free" stuff on the Internet can cost you time and money. And this is especially true if you're browsing and downloading without proper protection.

How to protect yourself from malware

1. Make sure your PC is updated and secure

The software on your PC isn't perfect. It may contain exploits or security holes that make it possible for your machine to be infected easily. Software companies know their programs are not perfect. That's why they release updates. Microsoft, Apple and Adobe all release dozens of updates every year. You need to make sure these updates are applied to the applications you run, or you're increasing your risk of infection. Our free Health Check software is a quick and easy way to make sure your PC is protected. Of course, we also recommend always running updated Internet security that includes anti-virus, anti-spyware and firewall protection. Browsing Protection is another layer of security that can keep you from clicking on the wrong links. If you don't have Browsing Protection, you can use ours. Check any link for free.

2. Be very skeptical of random pop-up windows, error messages and attachments

Modern browsers have reduced the burden of pop-up windows. But they do still exist. Most pop-ups are far more annoying than harmful. But you might think of pop-ups like broken windows in a neighborhood you are walking through at night: a sign that you should be on guard.

Avoid clicking on any pop-ups that imitate your Windows error messages, or error messages that come up when you try to close out of a page. (Force quit the program, if necessary.) If any software begins to install itself, close out immediately and run a scan with your Internet security software. You can also use our Online Scanner for free.

Avoid opening attachments at all unless you were expecting them and they come from a source you trust. If you can't verify the source or feel anxious about a particular attachment yet have to open it, you can download it to your hard drive and have your updated Internet security scan the file before you open it (a small sketch of one such pre-open check appears at the end of this article).

3. Remove spam from your life

If you get a piece of spam, let your mail software know. Identify it as spam. You could also unsubscribe, but unsubscribe links have been used on rare occasions to trigger a malware attack. Better to let your software handle it. If you have a friend on Facebook who spreads spam or bad apps, let them know. And if they continue spreading spam, unfriend them. You are responsible for your social network. Refuse to associate with those who are not responsible for theirs.

4. Think thrice before installing any new software

Installing software should never be an impulse decision. Some people say to think twice before downloading any software from a source you do not trust 100%. I say think three times. At the very least, Google the name of a product you want to install. If you're at all uncertain about whether to click download, consult with a tech-savvier friend or your company's IT guy.

When you install software, you could invite in a nasty predator that won't leave until it's done some serious damage. So think about installing software with a bit of the same sort of caution you use when deciding to let someone into your home.

5. Behave online as you would in real life

There's an old saying in my family: "Don't go licking the floors of a hospital." What it means is: "Use your common sense." You have a natural sensor in your brain that tells you when something feels dicey or unsafe. Trust your gut.

With the right software running and a willingness to step back when you feel uncertainty, you keep your PC and your life malware free.
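As a small illustration of the "scan before you open" advice in tip 2, the sketch below computes a SHA-256 hash of a downloaded attachment and compares it against a local list of known-bad hashes. It is a toy under stated assumptions: the file name and the hash list are placeholders, and a real security product would also use reputation services and behavioural scanning.

```python
import hashlib
from pathlib import Path

# Illustrative only: real products pull these from a threat-intelligence feed.
KNOWN_BAD_SHA256 = {
    "0" * 64,   # placeholder entry, not a real malware hash
}

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large downloads need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def looks_known_bad(path: Path) -> bool:
    return sha256_of(path) in KNOWN_BAD_SHA256

if __name__ == "__main__":
    attachment = Path("invoice_2015.pdf.exe")   # hypothetical download name
    if attachment.exists() and looks_known_bad(attachment):
        print("Known-bad file: delete it and run a full scan.")
    else:
        print("No match in the local list; still scan it before opening.")
```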


SecurX RSAC - ALM - SE 2015 Special Issue Advertising Rate Card

This exclusive and unique opportunity will reach information security professionals through the SecurX Magazine RSAC-ALM-SE 2015 special edition. Advertisement choices include back cover, inside front, inside back, double spread, full page, 1/2 page or 1/3 page formats at A4 size. Distribution will be to all RSAC-ALM-SE 2015 attendees along with 10,000 large corporate houses across verticals. This resource will offer a first look at the detailed 2015 agenda. A digital magazine version will be available on www.fsfora.in and promoted through social media after distribution.

Report Sponsorship: This media invites the RSAC Report Sponsorship, which will include a front page ad, 6 ads inside, branding on report editorial pages, a 1 x 2 page case study and one A4-size interview of the business leader.

FC @ ₹250,000; BC @ ₹150,000; IFC/IBC @ ₹75,000; DS @ ₹110,000; A4 Ad @ ₹50,500; 1/2 page @ ₹30,000; 1/3 page @ ₹25,000; Magazine Belly Band @ ₹170,000. Your company's logo and message will be prominently displayed on an eye-catching belly band wrapped vertically around the magazine.

The deadline for contract and payment is June 15, 2015, and the materials deadline is June 20, 2015.


A CIO’s experience

You will have heard about programs that perform automated security scanning for website safety assessments. Such scanning software was developed in response to international standards such as PCI DSS and the security requirements they specify. While these scanners may be familiar to e-commerce firms, for owners of businesses where no such standards apply, the idea of security scanners may be new.

There are many broadly similar security scanners available as software or SaaS, and for the uninitiated it can be difficult to understand the differences or their strengths and weaknesses. Further, despite their apparent simplicity, for organisations that do not have a professional information security officer it can be incredibly difficult to make effective use of these systems and the reports they generate. It seems so simple: launch or order the scanner service, get the report and pass it to the development team for bug fixing. So what is the problem? There are actually two.

First, automatic scanners almost always report vulnerabilities that don't actually exist on the website (false positives). Sadly, the more "clever" a scanner is, the longer it scans and the more false positive results are likely to be reported.

Second, just like the automated antivirus programs we run on our desktops, automatic website scanners do not always discover all vulnerabilities. That said, if the website is very simple it is likely that the scanner will indeed find all the vulnerabilities, but for more complex websites such effectiveness cannot be guaranteed.

So, can you get a report that reveals all the vulnerabilities and excludes false positives? Many IT security standards provide the answer and suggest using code reviews and penetration tests. The only problem with this approach is the price – it can be extremely high. There are few qualified professionals who can conduct code reviews and penetration tests reliably, and such professionals are expensive.

During my eighteen years in IT and IT security, I have made use of many types of security and scanning services and have had the chance to compare the results from automatic scanners, hybrid scanners and penetration testing. Here I share three examples of using website security scanning software.

Not all scanners are created equal

When conducting a website assessment in 2013 we tried web security solutions from both Qualys and High-Tech Bridge. The output of these scans was a report from Qualys (100 pages) and one from High-Tech Bridge's ImmuniWeb (15 pages). It was easy for me to read and understand each report, but knowing the shortcomings of automated scanners I was aware that the website could still have multiple (critical) security vulnerabilities that the automatic scanners would not have found. The two solutions take totally different approaches: while Qualys is a fully automated scanner, High-Tech Bridge's ImmuniWeb is a hybrid solution where the automated scanner is guided by a real person and complemented by manual penetration testing by a security professional.

In recent years we also found, when scanning websites, that the Qualys scanner would stop responding. If, as we were, you are chasing standards compliance this can be a major headache, because you are left without a compliance report or even some information that helps you understand the security level of the site. Of course there is technical support provided by scanner vendors – the last time I needed technical support from Qualys it took me about a month to get the issue resolved. High-Tech Bridge's portal support replied within a few hours.

On another occasion we assessed a medium-sized website using IBM Rational AppScan. The final document from AppScan came to 850 pages and listed 36 vulnerabilities. Analysing the entire 850-page report and checking the website cost our developers about a month of effort, and ultimately they reported that these vulnerabilities were not actually exploitable. Next, we ordered expensive manual penetration testing from a German company, the results of which showed that none of the vulnerabilities reported by AppScan existed; they were all false positives (needless to say, the testing cost a lot of money). Finally, we ordered an ImmuniWeb assessment for 639 USD (the price is now 990 USD). The assessment had only one recommendation, to use a trusted SSL certificate – a recommendation echoed by the developers and testers who conducted the penetration tests. This is a very good example of how automated solutions can waste your time and money even if your web applications are safe.

How intelligent are security scanners?

A security professional reading a report generated by automatic scanners will recognise that the way these scanners work is through pattern matching. What's wrong with that? Well, it means that any substantial deviation from the template will cause the scanner to miss the vulnerability. A website owner should be aware that there are programmers who will leave vulnerabilities in the code on purpose, and some do it in a way that the scanners cannot detect. Even the most advanced automatic scanners need to match against a huge number of templates – this is probably why so many scanners take such a long time to complete a website scan. Pattern-matching automated scanners have much in common with antivirus software. With antivirus software, the icon on your computer does not mean that there is no virus on your PC – it just means that the antivirus hasn't recognised any viruses on your PC. The success of antivirus and automatic scanners depends on many factors, such as the relevance of the software and pattern-matching databases, together with some mechanism for concluding that vulnerabilities (or viruses) are present. So, to be truly effective, the fully automated approach needs to be supplemented by an IT security expert who can add human intelligence and professional experience to the process and ultimately give confidence that vulnerabilities will not go unnoticed during a security scan.
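The pattern-matching limitation described above is easy to demonstrate. The toy scanner below flags a response only when it matches a stored signature, so a trivially reworded error message slips past it; both the signatures and the sample responses are invented for illustration.

```python
import re

# Invented signatures a naive scanner might ship with.
SIGNATURES = {
    "sql_error": re.compile(r"You have an error in your SQL syntax", re.I),
    "path_leak": re.compile(r"Warning: include\(.*\) failed", re.I),
}

def scan_response(body: str) -> list:
    """Return the names of signatures found in one HTTP response body."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(body)]

if __name__ == "__main__":
    stock_error = "You have an error in your SQL syntax near 'admin'--"
    reworded = "DB said: syntax problem near 'admin'--"   # same flaw, new wording

    print(scan_response(stock_error))   # ['sql_error']  -> reported
    print(scan_response(reworded))      # []             -> missed (false negative)
```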

Adding human intelligence is what Swiss company High-Tech Bridge did with its hybrid scanning approach. Its SaaS offering, ImmuniWeb, combines automated scanning with manual testing: the scanning is done by a program while, at the same time, the results are checked and completed by a professional who is qualified to carry out penetration tests. This expert can refine the scanning tasks immediately, based on the website being assessed, eliminating false positives from the scanner report. Moreover, manual penetration testing gives the highest detection rate of vulnerabilities. It is interesting to note that, in certain cases, the results of the low-cost hybrid assessment and of expensive professional penetration tests are the same.

For example, say an open-source platform is used for the website. The expert already knows the platform's published vulnerabilities at the time of scanning, so in the hybrid approach the expert need only find out which version of the platform is being used and check its settings. The report will therefore be specific to the platform used and contain only the vulnerabilities that really exist and are exploitable.

If you have decided to check the security of your website quickly and economically, you need to decide which scanner to choose: a fully automatic one, with a huge report that in practice is never read to the end, or a hybrid one, with a brief report containing recommendations verified and completed by a security expert.

What is the best way to check if your web site is secure?

For firms building new web sites or updating existing ones, here is a list of factors to consider:

- The specification you give to the developer should be prepared with security in mind.
- The website developer should have a good understanding of the secure software development lifecycle, undergo regular web-security training, perform obligatory code reviews (handled internally or by a third-party company), and have established IT-security processes in place.
- Software testing at all stages should include testing for security issues.
- Plan for ongoing maintenance of the website, including improvements and updates.
- Ensure a credible and effective response to hacking, DoS and DDoS attacks.
- The infrastructure of your website should be properly protected.

Beware of trusting your server host to secure your website. Hosting companies often make much noise about their security services (usually limited to one or more antivirus and malware-detection programs). Such measures reduce the risk of an infrastructure breach but are absolutely insufficient for protecting a website as a separate software package. So when you need to check the security of your website, you need scans and penetration tests of your web application, not of the infrastructure (which is also vital for website security, but offers far fewer vectors for hacking and hardening). Infrastructure security should, in any case, be fully checked and assured by the hosting company – make sure it is mentioned in your contract.

Author: Aleksandr Kirpo, CSO of Ukrainian Processing Center – www.upc.ua – the largest Visa/MasterCard/AMEX processing centre in Ukraine, handling 90% of all credit card transactions in the country. He has been with the company since 2002, has worked in IT and IT security since 1995, and graduated from Chernigov Technological University, where he specialised in computer intelligence systems and networks.


HeadsUp For WhatsApp

OTT messaging apps are big business. At the very start of 2015, the world's biggest messaging app, WhatsApp, announced it was handling up to 30 billion messages a day. This is an impressive figure and a sign of the growth that messaging apps have experienced. However, there are signs that their scale is beginning to attract unwanted attention: criminal groups that have made it their business to spam other messaging bearers, such as SMS, now seem to be moving – or being pushed – onto OTT messaging apps. Let's take a look at spam on the biggest messaging app, WhatsApp.

WhatSpam

A few weeks ago we monitored the image-spam shown below being received by Irish and UK WhatsApp subscribers in a wave of attacks. The spam itself was an investment advertisement sent from US numbers. The spam type in itself was not surprising; what is surprising is how relatively limited WhatsApp spam has been in the past. That no longer seems to be the case.

[Image: WhatsApp investment spam example]

As well as this investment spam, which seemed to be concentrated in a few waves, WhatsApp users in Europe have been targeted over the last few weeks with more constant spam attacks of kinds seen directly on other bearers. The most reported attack on social media at the moment is the fake handbag/luxury goods spam. The links in these messages lead to websites selling fake copies of the goods mentioned.

[Image: examples of the fake-goods sites]

This spam, which has been reported from Chinese mobile numbers, is very similar to the spam implicated in a Chinese-originated iMessage spam attack in 2014 that affected primarily the US, but also other countries. An example iMessage spam from July 2014 is shown below, and it is clearly similar to the WhatsApp examples. Due to the massive decline in the amount of SMS spam in America, that attack gained prominence, as it occupied a large percentage of the remaining spam being reported at the time. The presence of the same kind of attacks on WhatsApp clearly indicates that these spammers have decided to switch, or at least diversify, onto WhatsApp.

Another sign of spam crossing over from one bearer to another was the report, in the last week, of mobile malware being spread by WhatsApp. That analysis also noted that the malware – termed SocialPush by Lookout – was being spread via Twitter; in addition, we (AdaptiveMobile) detected the same malware being spread by SMS, meaning the malware authors, or other users of it, decided to distribute it over multiple popular messaging systems regardless of type.

Other WhatsApp spam types, such as the porn-conversation ads [3][4] shown below, have begun to be received in the Middle East, primarily from Indian mobile numbers. While this is not the same criminal group, and so not directly connected, the method used matches that of the porn-bot spammer group which operated originally on Yahoo and AIM and is now present on Kik Messenger.

The total scale of these individual spam attacks over WhatsApp is hard to tell, but it does seem clear that WhatsApp is joining the ranks of messaging systems with a functioning and active spam ecosystem, and that the contributors to this spam are both affected by and coming from other messaging systems.


For A Few Rupees More

While other regions are seeing spammers gradually move onto WhatsApp, one country in particular has faced a massive influx of spammers moving onto the messaging app – for reasons that should have had no impact on WhatsApp at all. The country where WhatsApp spam seems worst is India, and here it is increasing, bizarrely, because of government regulation.

In September 2011 the TRAI's (Telecom Regulatory Authority of India) anti-spam regulations for SMS came into force for mobile operators in India, imposing fines on operators for every single incident of SMS spam reported by subscribers. While it took some time for these regulations to be implemented, the results in the last few years have been highly successful. In one Indian mobile operator where AdaptiveMobile actively filters traffic, reported SMS spam dropped by nearly 97% in 2014 alone (see below), and by over 99% since filtering was introduced, with a 'steady state' indicated for the last 10 months. For another comparison, the background rate of spam actually sent and blocked in a second Indian operator where AdaptiveMobile is active is now roughly 0.12%. This is over 350 times lower than in China, which reported spam running at about 45% of all messages in 2014.

[Figure: TRAI figures]

However, this success seems to have led spammers in India to change tactics, and one of those tactics is to switch to sending spam via WhatsApp. First reported in early 2014, recent news reports from India indicate that while operators there confirm they are now winning the fight against SMS spam, spam sent over internet-based messaging such as WhatsApp is a major new front of unsolicited messaging. This covers many different types of spam, but primarily a whole range of unsolicited advertisements.

Economically, it is now very cost-efficient to send WhatsApp spam in India. One report explains that prices for WhatsApp advertising text messages bought in bulk are as low as 0.21 rupees (around 0.3 of a US or euro cent), and not much higher for image messages. In fact, just browsing the internet you can find even lower deals, including offers for advertising WhatsApp messages at 0.18 rupees. Interestingly, on the same website the SMS cost for the equivalent bulk deal is 0.09 rupees, meaning WhatsApp spam is still twice as expensive to send as SMS spam. This may not be the case for long – the price of WhatsApp advertising was much higher in the past and will probably continue to drop.

So what do you get for your extra 0.0015 dollars? For one thing, it is still more complicated for the spam provider to set up and send via WhatsApp, so those costs must be covered. Beyond that, sending via WhatsApp allows advertisers using the 'service' to send longer messages, and images if required. However, one main reason spammers are switching to WhatsApp is that they are exploiting a loophole in the anti-spam regulations. As an IP service which users optionally sign up for, and not a 'core telecom service', WhatsApp is not covered by the Do-Not-Disturb requirements, leading to a thriving industry offering to send spam over WhatsApp.
This fact is even pointed out by spammers advertising their services to those who wish to advertise – see the example we have highlighted below – which clearly spells out the advantage of WhatsApp as being legally able to send to DND (Do Not Disturb) numbers. Government intervention, it seems, has given SMS spammers a perfect reason to move to WhatsApp in India.
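As a quick sanity check on the economics described above, here is a small sketch that works through the quoted bulk rates. The 64 INR/USD exchange rate and the 100,000-message campaign size are assumptions chosen purely for illustration.

```python
# Bulk advertising rates quoted above (INR per message).
WHATSAPP_RATE_INR = 0.18
SMS_RATE_INR = 0.09
INR_PER_USD = 64.0           # assumed exchange rate, for illustration only

def campaign_cost_inr(rate_inr: float, messages: int) -> float:
    """Total cost in rupees for a bulk campaign at the given per-message rate."""
    return rate_inr * messages

msgs = 100_000
whatsapp_cost = campaign_cost_inr(WHATSAPP_RATE_INR, msgs)
sms_cost = campaign_cost_inr(SMS_RATE_INR, msgs)

print(f"WhatsApp: {whatsapp_cost:,.0f} INR (~${whatsapp_cost / INR_PER_USD:,.0f})")
print(f"SMS:      {sms_cost:,.0f} INR (~${sms_cost / INR_PER_USD:,.0f})")
# Premium per message for WhatsApp over SMS, in USD:
print(f"Extra per message: ${(WHATSAPP_RATE_INR - SMS_RATE_INR) / INR_PER_USD:.4f}")
```

At these rates the premium works out to roughly $0.0014 per message – in line with the "extra 0.0015 dollars" figure above, and small enough that the regulatory loophole easily outweighs it.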

Return to Sender

The source of these spam messages is also useful in our analysis. One of the benefits of WhatsApp is that the cost of sending international messages is irrelevant, so the source number can be from anywhere. The same is true of WhatsApp spam: investment spam originating in the US but received in Europe, luxury goods spam originating in China and also received in Europe, and porn spam originating in India but received in the Middle East.

If we dive deeper into the numbers used, we can also see evidence of a more complex spamming structure emerging. Analysis of the US numbers reported as sending WhatsApp spam worldwide shows that many of them belong to VoIP operators, meaning they can be assigned virtually. This is interesting, as virtually assignable numbers would be valuable for WhatsApp spam: in the case of WhatsApp account closures, spammers could simply use new VoIP virtual numbers to create and validate new accounts and continue sending. The use of VoIP numbers has been common in US SMS spam over the last one to two years, as 'real' numbers have become less attractive for sending spam due to aggressive shutdowns. This reuse of the same methods from other messaging spam types – using VoIP numbers – along with the same scams, means that the WhatsApp spammers are not 'native' spammers, but incoming groups who have operated on other types of messaging and who come to WhatsApp with extensive experience.

What is WhatsApp to do?

Recent updates from Germany draw some attention to how WhatsApp now deals with spammers: temporary exclusions are put in place if users send to too many people who do not have them as a contact, or are blocked by too many people in a short period of time. Some of these techniques are innovative and useful, as they use the 'reporting' of blocked users to build a reputation, and use the contacts uploaded by WhatsApp users as a form of validation. The principle is that if both parties in a conversation have each other as a contact, they should be permitted to message each other.

Unfortunately, there are failings. The above methods are behaviour-based and may occasionally generate 'false positives' (senders flagged as spammers that are not). For example, someone who lost their phone, received a new number, and sent a WhatsApp message to all their old contacts might trigger the restriction. This is presumably why these restrictions lead to temporary, rather than permanent, blocking of the WhatsApp account. Optimising these restrictions to prevent false positives is likely to be a long-term effort. More seriously, at the moment it is not possible to report spam message content to WhatsApp, nor can users restrict WhatsApp messages to contacts only – in effect forming a whitelist of approved senders. This second point, not introducing a whitelist, is probably a deliberate design decision to ensure that new WhatsApp users can contact people within the app without having to resort to SMS or other apps.
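The behaviour-based restrictions described above – too many messages to non-contacts, or too many block reports in a short window – can be sketched as a simple sliding-window heuristic. This is a minimal illustration of the idea, not WhatsApp's actual implementation; the thresholds and window length are invented for the example.

```python
from collections import deque
import time

# Illustrative thresholds (not WhatsApp's real values).
MAX_NON_CONTACT_MSGS = 50      # messages to users who don't have the sender as a contact
MAX_BLOCK_REPORTS = 10         # times the sender is blocked by recipients
WINDOW_SECONDS = 3600          # one-hour sliding window

class SenderReputation:
    def __init__(self):
        self.non_contact_msgs = deque()   # timestamps of messages to non-contacts
        self.block_reports = deque()      # timestamps of block reports

    def _trim(self, events, now):
        while events and now - events[0] > WINDOW_SECONDS:
            events.popleft()

    def record_message(self, recipient_has_sender_as_contact: bool, now=None):
        now = now or time.time()
        if not recipient_has_sender_as_contact:
            self.non_contact_msgs.append(now)

    def record_block(self, now=None):
        self.block_reports.append(now or time.time())

    def should_temporarily_exclude(self, now=None) -> bool:
        now = now or time.time()
        self._trim(self.non_contact_msgs, now)
        self._trim(self.block_reports, now)
        return (len(self.non_contact_msgs) > MAX_NON_CONTACT_MSGS
                or len(self.block_reports) > MAX_BLOCK_REPORTS)
```

A real system would combine many more signals and decay scores over time, which is precisely why, as noted above, tuning out false positives is a long-term effort.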
You Can Please Some of the People...

In addition, one security feature that WhatsApp has already implemented – end-to-end encryption – rules out several of the methods that messaging systems normally use to deal with spam. End-to-end encryption means that messages are encrypted from one handset to another, so that neither the WhatsApp servers routing the message nor any other entity can decrypt them in transit. While laudable, there are trade-offs to this decision: it also means that spam filters on the WhatsApp servers cannot extract features from encrypted message content in order to apply anti-spam content logic. The discussion of the tension between end-to-end encryption and anti-spam has been covered well in a recent conversation, and is well worth a look.


The end result is that it is very difficult to do content filtering of WhatsApp messages on the server side, preventing the use of many of the techniques applied to unencrypted messaging systems. This need not be a critical loss – it is problematic, but as the E2E conversation notes, there might be ways to do some feature extraction at the client, although these are likely to be infeasible or untrustworthy. Longer term, promising methods such as homomorphic encryption – an encryption approach that allows operations on encrypted values without having to decrypt them first – may offer WhatsApp the ability to filter encrypted content at its servers. While great strides have been made recently in this area, it is still likely to take many years before it is ready for widespread use.

For now, though, WhatsApp (and any mobile-based service) is still in the good position of having strong identity – namely phone numbers – on which it can base attribution, and all OTT messaging apps control who has access. It would also be a mistake to think that WhatsApp is being 'flooded' by spam to the same extent as email. While only WhatsApp knows the true level of spam in its ecosystem, there may be ways to gauge users' exposure to it using Google Trends. Below we have plotted a graph of Google searches for "WhatsApp spam" versus "SMS spam" from 2011 onwards. If one assumes that Google Trends usefully indicates that people searching for a term have been affected by it (an assumption that has proved problematic with flu searches), then there are two ways to read this. On one hand, WhatsApp, with 700 million users, compared with the world's 4.6 billion mobile phone users – all capable of receiving SMS – is generating proportionally more spam searches than would be expected. On the other hand, WhatsApp, with up to 30 billion messages sent a day versus SMS's estimated 20 billion, is generating proportionally fewer spam searches than would be expected. The truth is probably somewhere in between. The fact that the search terms are in English, and the presence of peaks related to public events such as news articles about WhatsApp email spam, means these trends must be read cautiously. Trends like these are best combined with additional data, but even from the data we have it is clear that users are searching for "WhatsApp spam" more frequently, and it is on track to exceed searches for "SMS spam" by mid-summer 2016. This again indicates a shift in a spam 'metric' from SMS to WhatsApp. In any case, the days when WhatsApp users could assume they were immune from spam are drawing to an end: the bigger WhatsApp becomes, the more it will be a target for the spammers and criminals who have honed their skills on other, more established, messaging bearers.

Old Spammers Never Die...

For this discussion we focused on WhatsApp, the biggest OTT messaging app at 700 million monthly active users, but we could have taken any of the main messaging apps. The lesson is that, with the 'pull' factor of a growing user base and the 'push' factor of increased spam defences and (in some regions) government regulation on other bearers, OTT messaging apps become ever more attractive to established messaging criminal groups looking to 'cross over'.
These apps should therefore be alert and prepared, putting in place the technologies and teams needed to deal with the threat before it has a chance to affect the service or its users. For WhatsApp and others in 2015, the recommendation is to expect more 'cross-overs' from other messaging systems and to build in security to stop them.
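To give a flavour of the homomorphic-encryption idea mentioned above, here is a toy, self-contained Paillier-style sketch: the "server" adds two values while they are still encrypted and never sees the plaintexts. The tiny primes and the "suspicious keyword count" framing are purely illustrative assumptions – this is nowhere near a real spam filter or a production cryptosystem.

```python
import math, random

# --- Toy Paillier keypair (tiny primes: insecure, for illustration only) ---
p, q = 293, 433
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1                                            # standard simple choice of g
mu = pow(lam, -1, n)                                 # with g = n+1, mu = lam^-1 mod n

def encrypt(m: int) -> int:
    """Encrypt integer m (0 <= m < n) under the public key (n, g)."""
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Decrypt with the private key (lam, mu)."""
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

# A client encrypts per-message "suspicious keyword counts" before upload...
c1, c2 = encrypt(3), encrypt(4)

# ...and the server adds them while still encrypted: multiplying ciphertexts
# adds the underlying plaintexts, without the server ever seeing 3 or 4.
c_sum = (c1 * c2) % n2
print(decrypt(c_sum))   # 7
```

Real content filtering would need far richer operations than addition, which is exactly why the article hedges that widespread use is still years away.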


The fundamental principle that underpins all security is the need to stop bad people or processes while allowing good people or processes. So security is about access control, and access control starts with identity. But identity on its own is not enough – we also need to understand purpose. We need to identify the person or process, and decide whether the intent is good or bad.

Consider passports in the physical world. They prove identity, but do not tell us intent: is the intent of that identity to do good or bad? We reinforce identity with lists of known intent: a whitelist of frequent flyers or VIPs whose intent is known to be good, and a blacklist of terrorists and bad people whose intent is known to be bad.

Cyber security is the same: based on identity and intent we maintain whitelists of known good (or at least acceptable) behavior, and blacklists of known bad (or unacceptable) behavior. Security is largely based on how we use these lists. In the main, we either allow what is on the whitelist and prevent everything else, or we prevent what is on the blacklist and allow everything else. We tend to concentrate on one approach or the other: whitelisting or blacklisting. Keeping our computers clean is a good example. In the beginning the anti-malware industry simply blacklisted the bad things; now the alternative – whitelisting the good things – is gaining traction. We need to know which is best for maximum security.

In favor of blacklisting

The basis of anti-virus security is a blacklist of all known malware. The technology is based on blacklisting because in the beginning there were very few viruses. A primary advantage of blacklisting is that it is conceptually simple: recognize a few bad things, stop them, and allow all else. A second argument in its favor is administrative ease. The maintenance of blacklists is something we can delegate to trusted third parties – in this instance the anti-virus companies – who in turn, particularly with the advent of the internet, can automatically update the blacklist for us. Basically, we don't have to do anything. Whitelisting is different: it is difficult to delegate to a third party the decision on which applications we need.

Whitelisting would be the perfect solution if people only had one computer that was never patched and never changed. Intellectually it makes perfect sense to only allow execution of the files that you know to be good. But maintaining this whitelist is difficult. The problem comes when you have to register or re-register every DLL every time you install a new, or patch an existing, application. Which people do you allow to install their own software, and which people do you stop? Which bits of software can make changes and which can't? It becomes an administrative rather than an intellectual issue. Whitelisting – which isn't much different in principle to the integrity checking of yesteryear – requires more work by internal support teams and interferes with the end-users' God-given right to install anything they like, which is more of a problem in some environments than in others.

That's not to say that everyone considers such delegation impossible. Last year Microsoft proposed a form of whitelisting for access to the internet: only users with an internet health certificate for their computer should be allowed access. The idea has few supporters in the security industry. As Power puts it: "If computers were like televisions, with just one base operating system that was never changed, then it's doable. But in the real world there are just so many variables associated with Windows and all the bits of software that have ever been written for Windows, that it's almost impossible to be able to say what is and what is not a clean or healthy computer."

Others see a different problem with this type of whitelisting. An online trader would rather take the occasional fraudulent transaction than risk turning away a good one, so the thought of blocking a user from the internet until they are trusted would terrify many of the e-commerce providers whose livelihood rests on the principle of the more users the better. Most of the e-commerce world would lobby very hard against this version of whitelisting. So one of the strongest arguments in favor of blacklisting is the set of problems associated with whitelisting.

In favor of whitelisting

There is, however, a specific problem with blacklisting. Antivirus blacklisting is based on detecting things that are known to be bad and stopping them. It simply cannot detect things that are bad but not yet known. Zero-day threats are not known precisely because they are zero-day threats – and blacklisting merely lets them in as if they were good. What we are seeing today is a lot of targeted, covert attacks – external infiltration into corporate networks with a view to the theft of valuable information, using techniques specifically designed to evade blacklisting – and one possible response to zero-day threats is whitelisting. The sheer volume of malware is also a problem for blacklisting. Blacklisting is threat-centric; whitelisting is completely the opposite: it is trust-centric. While blacklisting malware used to be adequate, the threat arena in the cyberworld has exploded to such an extent that we now have to question whether blacklisting alone is still good enough.


This is what Lumension does: it protects end-points (such as the PC on your desk) by making it administratively easy to create and maintain a whitelist of acceptable applications, while supporting that with a blacklist of malware. Looking at the two together, whitelisting should absolutely be the first line of defense for any organization, because it simply stops everything that isn't approved. What it cannot do is remove malware once it has embedded itself into a machine.

Bit9, like Lumension, concentrates on whitelisting. The premise of application whitelisting is very simple: what you want to run on your system is a much smaller set than what you don't want. We apply this model to other aspects of security in our lives. For example, who do you let into your home? You don't keep a list of everyone bad in the world; rather, you only allow in those you trust.

Kaspersky, like other AV companies, is already looking into whitelisting, running a whitelist program that collects information about all known good files, sent in by whitelist partners and also through its Kaspersky Security Network (KSN) – a 'neighborhood watch' that users become part of when they install Kaspersky Internet Security. Information about unknown files is sent to the in-the-cloud service and automatically analyzed. If a file is malicious, all computers within the network are protected; if it is not, it is added to the whitelist. This has two benefits for customers: it reduces the risk of false positives and it increases scan speeds. In this way Kaspersky collects information – not the files themselves – about millions of files.

What we're seeing is that the explosion in malware (in excess of 2 million new pieces every month) is exactly what makes us question whether blacklisting remains realistic. As a general rule, whitelisting is always more secure than blacklisting, but it requires you to think more about how software arrives on your systems and whether or not it is trustworthy. That is why a software reputation database can be an invaluable aid in whitelisting: it provides a trust rating on software, like a trusted advisor or background-check service, that can make the process more manageable. If everything you run comes from a well-known third party, approving software almost exclusively from a cloud-based reputation service can be enough. In most cases, however, you also have your own custom or proprietary software, so an effective and robust whitelisting solution lets you combine your own policies with those from a reputation database. We should therefore ask whether we can harness the power of cloud-based reputation systems to generate our whitelists. Spamina already uses this methodology to produce its blacklist of spam sources, calling on six separate reputation blacklists but never relying on just one (thus minimizing the chance of false positives).

The anti-virus industry

Whitelisting can and does work for businesses, though it works best where there is an authoritarian IT culture rather than laissez-faire: restricted privileges and so on. It does, at a stroke, obviate most of the risk from social-engineering-dependent threats. In fact, most AV nowadays has some whitelisting ability, though how it is done and to what extent varies enormously. As the amount of malware increases, at some point it could become more efficient to only allow whitelisted files to run in an organization. The idea has been around for a while, but many things have to be taken into consideration: software updates (especially Windows updates), remote users, smartphones and non-standard users. Ideally, as well as using the vendor's whitelist you could have a local whitelist too. So while the idea of a 'trusted environment' is very appealing, in practice it is difficult to achieve.

Whitelisting or blacklisting?

So what's our conclusion? Whitelisting is fundamentally the better security solution. If something isn't on the list, it gets stopped – the default position for whitelisting is secure. With blacklisting, if something isn't on the list it gets allowed – the default position is insecure. Against this, the administrative effort involved in blacklisting is minimal compared with whitelisting, and the difference grows as the size of the whitelist grows; meanwhile, the efficiency of blacklisting decreases as its size increases. You could almost say that whitelisting is best with a small whitelist, while blacklisting is best with a small blacklist. Since neither of these situations is likely to occur in the real world, our conclusion is simple: you need both.
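Following the "you need both" conclusion, here is a minimal sketch of a default-deny whitelist check backed by a blacklist. The hashes, file contents and policy are invented purely to illustrate the two default positions; real products derive these lists from vendors, reputation services and an organisation's own golden images.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical known-good and known-bad hash sets.
WHITELIST = {sha256(b"approved-accounting-app v2.1")}
BLACKLIST = {sha256(b"known banking trojan")}

def may_execute(file_bytes: bytes) -> bool:
    h = sha256(file_bytes)
    if h in BLACKLIST:          # known bad: always stop (and flag for removal)
        return False
    return h in WHITELIST       # default-deny: only known good is allowed to run

print(may_execute(b"approved-accounting-app v2.1"))   # True
print(may_execute(b"known banking trojan"))           # False
print(may_execute(b"brand-new unknown binary"))       # False: unknown is blocked by default
```

Note that the blacklist check is technically redundant for execution control (unknowns are already denied); its value is in identifying known malware that is already on the machine so it can be removed, which, as observed above, a pure whitelist cannot do.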


Organizations are increasingly concerned that historical industry best practices are being stressed by the acceleration of new malware and increasing reports of compromise via stealthy targeted attacks.

INVITING SECURITY START-UPS

Action Leaders Meet is bringing together top Indian security professionals for a day-long engagement at a star property in Mumbai. (Detailed list available for sponsors.)

In our endeavour to support start-ups, we bring you a special engagement opportunity. We have identified a one-hour slot, which we offer to start-ups divided into slots of a minimum of one minute each, to deliver maximum impact at minimum investment. You can pick and choose the exact time you need. To increase your visibility, we are supporting this with other inventory, at cost! See the opportunity detailed alongside. LIMITED SLOTS!!!

TO ENGAGE WITH CXOs FROM ACROSS VERTICALS

DELIVERABLES
- Presentation @ INR 5,000 per minute, for a 5-minute slot

Free add-ons
- 5 units x 3-second freeze-frame A/V slide between PPTs
- One 1/2-page colour ad in SecurX magazine
- 250 pt x 300 pt static box ad on SecurX for 1 year (creative may be changed 10 times), or a clickable ad of the same size for 3 months

At cost
- 1 table + 1 chair + 1 power point @ INR 30,000
- Table wrap flex with desired branding

Attackers are laser-focused, leveraging indirect and multi-pronged exploits to steal data or wreak havoc. The insider threat has largely morphed into phishing attacks, which can leverage multiple internal security weaknesses and vulnerabilities to traverse the network and exfiltrate data or intellectual property undetected. At the same time, the attack surface is broad, since security is horizontal and increasingly distributed: it covers many threat vectors across all extended business functions and essential services, throughout the whole multi-location information technology and building infrastructure.

Chances are that when the infrastructure was originally deployed it was secure, clean and well organized. But as weeks, months and even years pass, tactical changes in technology and the IT environment occur, weakening the security posture and opening it up to attack. The result is that the security infrastructure becomes more complex and fragmented, and harder to protect. Attackers don't discriminate and will take advantage of any gap in protection to reach their end goal. The bad guys continually evolve and innovate. All potential threat vectors need to be examined and addressed holistically across the extended enterprise. Without a proactive but practical security strategy and vital lifecycle management processes in place, the system will inevitably become vulnerable and fail.

Addressing Multi-Dimensional Threats


The terms "advanced persistent threat" (APT) and "defense in depth" have been overhyped in the press and are distracting organizations from the real problem of managing targeted attacks in a rational and balanced fashion.


Many organizations lack a complete understanding of defense in depth, which limits budgets and can lead to revenue-impacting events. Many well-intended vendors seeking to position their solutions confuse the concept of defense in depth even further. Defense in depth requires a strategic security approach that is adaptive, establishes business-driven rules, and leverages people, process and systems harmoniously. Integration into a holistic security management system is vital.

Evolving From Business Barrier to Enabler

How do organizations cut through the hype, filter the noise of fear, uncertainty and doubt (FUD), and deal with real and present threats? How do they develop an affordable, practical and defensible security posture that supports the business within the available budget and resources, and enables it to grow competitively while managing risk and protecting critical assets? How do they develop a continuous cycle to consolidate, integrate and organize mission-critical infrastructure into a sustainable core, while still enabling healthy chaos – innovation and rapid deployment – at the edge?

The secret to success in security is typically simplicity: a well-designed and organized infrastructure that provides the appropriate layers of control while giving users a consistent, policy-managed experience regardless of location, network transport or device. The challenge is in achieving and maintaining that goal. Security done right is a business enabler that dramatically reduces total cost of ownership (TCO), providing a tangible return on security investment (ROSI). When IT complexity and fragmentation are replaced by an adaptive, modular and flexible architecture, agility improves along with your competitive edge, and the business can refocus quickly as new opportunities emerge.

"Security is a business enabler: you can drive faster with good brakes." – Nigel Willson

Back to Basic Security Principles

The primary purpose of creating a security architecture is to ensure that business strategy and IT security are aligned. As such, the security architecture allows traceability from the business strategy down to the underlying technology. However, many IT organizations have moved away from formal security architecture governance in favor of rapid deployment cycles and tactical changes, which over time risk diverging into complexity and fragmentation, with unresolved security exceptions. As previously stated, complexity not only leads to insecurity and an increasing potential for human error, but also to increased cost of operations.

A security architecture is a design document describing the security components that will protect the enterprise, and the ways they relate to and interact with each other. It represents a strategic planning horizon and guide that defines the desired state of an organization's infrastructure. The architecture sets the context for planning, design and implementation. It enables a company to evolve and become agile, multi-functional and competitive, allowing the seamless adoption of new capabilities and applications into a common infrastructure. Security architecture also facilitates budgeting for security solutions and personnel. In summary, the security architecture provides:

- A means to evaluate the applicability of new technologies, products and services
- A framework for technology decision-making
- A macro view of IT systems and components from the security perspective
- A statement of direction for IT
- A way to reduce and manage risk in the most cost-effective manner
- Easier administration of, and compatibility between, systems
- A blueprint for future network growth
- A way to create and document consensus
- A methodology that forces consideration of all design factors
- A guide for the creation of an enabling infrastructure for unforeseen new applications

Adaptive Security Architecture Lifecycle

The security architecture is used as a baseline for consensus and direction, but it needs to be active and capable of being updated. This process allows the security architecture to adapt and stay agile in supporting the needs of the business: it evolves and sets future objectives.

System technology and users, the data and information in the systems, the risks associated with the system, business drivers and security requirements are all ever-changing. Many types of change affect security: technological developments (whether adopted by the system owner or available for use by others), connection to external networks, a change in the value or use of information, or the emergence of a new threat. Creating an adaptive, modular architecture leads to agility and flexibility as the organization grows.

At the same time, using the architecture to develop an annual plan sets the stage for the projects that need to occur that year, and the improvements begin to converge towards, and track with, the architecture. Finally, with proactive asset, risk and policy management and infrastructure improvements, the security-risk profile is also managed, resulting in risk reduction. In this manner, the security architecture not only drives the IT and network infrastructure direction, it also enables the illustration of tangible results, winning continued support for the program.


As more people are using the phrase "third platform", I'll assume it needs no introduction or explanation. The mobile workforce has been mobile for a few years now, and most organizations have moved critical services to cloud-based offerings. It's not a prediction; it's here. The two big components of the third platform are mobile and cloud.

Mobile

I had posed the question "Is MAM Identity and Access Management's next big thing?", and the answer is a resounding YES! Today I came across a blog entry explaining why Android devices are a security nightmare for companies. The pain is easy to see: OS updates and security patches are slow to arrive, and user behavior is, well... questionable. So organizations should be concerned about how their data and applications are being accessed across this sea of devices and applications. As we know, locking down the data is not an option. In the extended enterprise, people need access to data from wherever they are, on whatever device they're using. The challenge, then, is to control the flow of information and restrict it to proper use.

So, here's a question: is MDM the right approach to controlling access for mobile users? Do you really want to stand up a new technology silo that manages end-user devices? Is that even practical? Certain technologies live a short life because they quickly get passed over by something new and better (think electric typewriters), and MDM is one of them. Although it's still fairly new and good at what it does, I would claim that MDM is antiquated technology. In a BYOD world, people don't want to turn control of their devices over to their employers.

The age of enterprises controlling devices went out the window with BlackBerry's market share. Containerization is where it's at. With app containerization, organizations create a secure virtual workspace on mobile devices that enables corporate-approved apps to access, use, edit and share corporate data while protecting that data from escaping to unapproved apps, personal email, OS malware and other on-device leakage points. For enterprise use-case scenarios, this simply makes more sense than MDM, and many of the top MDM vendors have validated the approach by announcing MAM offerings. Still, these solutions maintain a technology silo specific to remote access, which doesn't make much sense to me.

As an alternative approach, let's build MAM capabilities directly into the existing Access Management platform. Access Management for the third platform must accommodate mobile device use-cases. There's no reason to manage mobile device access differently from desktop access: it's the same applications, the same data and the same business policies. User provisioning workflows should accommodate provisioning of mobile apps and data rights, just as they have been extended to provision privileged account rights. You don't want or need separate silos.

Cloud

The same can be said for cloud-hosted apps. Cloud apps are simply part of the extended enterprise and should also be managed via the enterprise Access Management platform. There has been a lot of buzz in the IAM industry about managing access (and providing SSO) to cloud services, and a number of niche vendors have popped up offering that as their primary value proposition. But the core technologies behind these stand-alone solutions are nothing new. In most cases it's basic federation; in some cases it's ESSO-style form-fill. There's no magic to delivering SSO to SaaS apps. In fact, it's typically easier than SSO to enterprise apps, because SaaS infrastructures are newer and support newer standards and protocols (SAML, REST, etc.).
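To illustrate why SSO to newer, standards-based services is comparatively straightforward, here is a minimal sketch of the token-checking step a gateway or access-management layer might perform for a REST API, in the OIDC/JWT style that sits alongside SAML. The shared secret, issuer and audience values are placeholders; a real deployment would verify asymmetric signatures against the identity provider's published keys rather than a shared secret.

```python
# pip install PyJWT
import jwt  # PyJWT

SHARED_SECRET = "replace-me"                   # placeholder; real setups use the IdP's keys
EXPECTED_ISSUER = "https://idp.example.com"    # hypothetical identity provider
EXPECTED_AUDIENCE = "orders-api"               # hypothetical protected API

def validate_access_token(token: str) -> dict:
    """Verify signature, issuer and audience (and expiry, when present); return claims."""
    return jwt.decode(
        token,
        SHARED_SECRET,
        algorithms=["HS256"],
        issuer=EXPECTED_ISSUER,
        audience=EXPECTED_AUDIENCE,
    )

# Example: mint a token the way an IdP would, then validate it at the "gateway".
token = jwt.encode(
    {"sub": "alice", "iss": EXPECTED_ISSUER, "aud": EXPECTED_AUDIENCE, "scope": "orders:read"},
    SHARED_SECRET,
    algorithm="HS256",
)
print(validate_access_token(token)["sub"])    # alice
```

The same validation logic serves desktop, mobile and SaaS clients alike, which is the point of keeping it in one access-management layer rather than a separate silo.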

My Point

I guess if I had to boil this down, I'm really just trying to dispel the myths about mobile and cloud solutions. When you get past the marketing jargon, we're still talking about Access Management and Identity Governance. Some of the new technologies are pretty cool (containerization solves some interesting, complex problems related to BYOD), but in the end I want to manage enterprise access in one place, with one platform. One identity, one platform. I wouldn't stand up an IDaaS solution just to get SSO to cloud apps, and I wouldn't introduce an MDM vendor just to control access from mobile devices.

The third platform simply extends the enterprise beyond the firewall. The concept isn't new and the technologies are mostly the same. As more and newer services adopt common protocols, it gets even easier to support increasingly complex use-cases. An API gateway, for example, allows a mobile app to access legacy mainframe data over REST protocols. Modern Web Access Management (WAM) solutions perform device fingerprinting to increase assurance and reduce risk while delivering an SSO experience. Mobile security SDKs enable organizations to build their own apps with native security that is integrated with the enterprise WAM solution (especially valuable for consumer-facing apps). And all of this should be delivered on a single platform for Enterprise Access Management. That's third-platform IAM.

– Mathew Flynn



