by Perry Lynch
4:00 min read | Audio
Everything in systems security is ultimately about protecting data. CIS Control #13 deals with data protection in its most direct sense. The main issues are identifying sensitive data, preventing its unauthorized transfer, detecting any such transfers, and making improperly acquired data as difficult to use as possible.
Identifying critical data
The first step is to identify the data that needs protection. Organizations generally have their data spread over multiple systems with varying levels of security. However, you can successfully protect this data through the use of several tools and techniques: Access control, encryption, integrity protection, and data loss prevention can be used together to identify, restrict, and protect any sensitive or mission-critical data.
A data classification process should be undertaken. Once data is properly classified and labeled as regulated, sensitive, confidential, or public, those files and folders should then be migrated to properly identified folders on the SAN, and group policy should be applied to ensure that access is limited to authorized staff members.
Databases and files with sensitive data should be kept on machines which aren't exposed to outside connections. Access to them should also be restricted to authorized users on the internal network as well, in a manner that’s consistent with business requirements.
Once sensitive data is adequately secured, routine network hygiene needs to take place: Many users will maintain bad habits and keep unsecured copies of sensitive data because it's convenient. Administrators should routinely use appropriate tools to scan desktops and non-secured folders on the SAN for cleartext that looks like sensitive data and alert the appropriate data owners.
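As a rough illustration of what such a cleartext scan might look for, here is a minimal Python sketch. The two patterns and the Luhn filter are common heuristics, not a reference to any particular DLP product; a real tool would use many more patterns, and the formats matched here are assumptions for the example.

```python
import re

# Patterns that often indicate sensitive cleartext (illustrative, not exhaustive).
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to cut false positives on card-like digit runs."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def scan_text(text: str) -> list[str]:
    """Return findings that look like sensitive data in a blob of cleartext."""
    findings = [f"possible SSN: {m}" for m in SSN_RE.findall(text)]
    for m in CARD_RE.findall(text):
        if luhn_valid(m):
            findings.append(f"possible card number: {m.strip()}")
    return findings
```

In practice such a scanner would walk desktops and shared folders, feed each file through `scan_text`, and alert the data owner on any hit.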
Protection by (and from) encryption
Laptops and mobile devices are easily stolen, so if they hold any sensitive information, the entire device needs encryption. Mobile Device Management tools can be used to secure sensitive corporate data for corporate and user-owned phones and smart devices, without impeding the end user’s personal use of the device. Full Disk Encryption should be deployed for all corporate laptops, using a centralized key management system. This will prevent unauthorized users from being able to access the device and any data should the laptop become lost or stolen.
Within the enterprise, encryption is often required in databases and other systems on the network. Many databases contain sensitive fields that require encryption or hashing, independently of whether the disk is encrypted. Other systems may require that the entire database be encrypted.
Methods of encryption and hashing need periodic review. Some algorithms that were once considered strong, such as the SHA-1 hash function, are now deprecated because of known weaknesses. Any data protected with them needs migration to a stronger algorithm.
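One common migration pattern is to verify data against its stored digest and opportunistically re-hash it with a stronger algorithm. The sketch below assumes a made-up `algo:hexdigest` storage convention purely for illustration:

```python
import hashlib

def verify_and_upgrade(data: bytes, stored_digest: str) -> tuple[bool, str]:
    """Verify data against a stored digest; if the digest still uses a
    deprecated algorithm, return an upgraded SHA-256 digest to store instead.
    Digest format assumed here: '<algo>:<hexdigest>' (a made-up convention)."""
    algo, _, hexdigest = stored_digest.partition(":")
    if hashlib.new(algo, data).hexdigest() != hexdigest:
        return False, stored_digest          # integrity check failed
    if algo in ("sha1", "md5"):              # deprecated algorithms
        return True, "sha256:" + hashlib.sha256(data).hexdigest()
    return True, stored_digest               # already strong enough
```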
Encryption is valuable, but it's a problem when it isn't supposed to be happening. If outgoing encrypted traffic is originating from unauthorized desktops, it could be evidence of malware sneaking the data out. Network monitoring software can detect and flag the use of SSH and other secure protocols outside of expected contexts. If they don't have a legitimate purpose, administrators need to track down their source and remove any malware responsible.
Encrypted exfiltration can also tunnel through harmless-looking packets, such as DNS requests. These are harder to detect, but application-level monitoring software can often identify them by characteristics like abnormally long data fields.
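A toy version of that heuristic: flag DNS query names whose leading label is unusually long or unusually random-looking. The thresholds below are illustrative assumptions and would need tuning against real traffic:

```python
import math
from collections import Counter

MAX_LABEL = 30        # threshold chosen for illustration; tune to your traffic
MAX_ENTROPY = 4.0     # bits per character; encoded payloads look near-random

def shannon_entropy(s: str) -> float:
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_tunnel(qname: str) -> bool:
    """Flag DNS query names with unusually long or high-entropy subdomains."""
    labels = qname.rstrip(".").split(".")
    subdomain = labels[0] if len(labels) > 2 else ""
    if len(subdomain) > MAX_LABEL:
        return True
    return len(subdomain) >= 16 and shannon_entropy(subdomain) > MAX_ENTROPY
```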
Monitoring data movement
Network monitoring can generally recognize dubious packets. These packets could be included in otherwise legitimate traffic, such as an email that carries sensitive information in cleartext. It could indicate malware is at work, but it might also indicate that users are making otherwise legitimate transfers in an insecure way.
This falls into the area of data loss prevention (DLP). Software systems for DLP take a variety of approaches to recognizing abnormal traffic. Most rely on pattern detection, so human verification is generally necessary. Other systems rely on fingerprinting previously identified data and will operate effectively with a lower level of human intervention. In either case, the software needs to be configured so that the number of false positives is reasonably low and all alerts get the attention they need.
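Fingerprinting can be sketched simply: hash fixed-size chunks of documents already classified as sensitive, then slide over outbound data checking every window against those hashes. The chunk size and scheme below are illustrative choices, not how any specific DLP product works:

```python
import hashlib

def fingerprint(data: bytes, chunk_size: int = 64) -> set[str]:
    """Hash fixed-size, aligned chunks of a known-sensitive document."""
    return {
        hashlib.sha256(data[i:i + chunk_size]).hexdigest()
        for i in range(0, len(data) - chunk_size + 1, chunk_size)
    }

def contains_fingerprinted(data: bytes, known: set[str],
                           chunk_size: int = 64) -> bool:
    """Slide byte-by-byte over outbound data, checking windows against
    known chunk hashes. Any aligned chunk copied intact will be found."""
    for i in range(0, max(len(data) - chunk_size, 0) + 1):
        if hashlib.sha256(data[i:i + chunk_size]).hexdigest() in known:
            return True
    return False
```

Real systems use rolling hashes to make the sliding check cheap; the brute-force loop here just shows the idea.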
Known hostile IP addresses should be blocked and monitored, as attempts to reach them could indicate that malware is trying to send out sensitive data; other destinations could be attempted if the first one is unreachable.
Transferring data within the network is sometimes a concern. Copying sensitive information to mobile phones or portable storage devices increases the risk. It may be a good idea to configure machines to prevent those transfers.
A large part of data protection is simply knowing where the information is and where it's going. Keeping track of all sensitive data storage and limiting its movement are essential practices, and accomplishing that requires safe network configurations, monitoring of traffic, encryption of data, and prompt action when problems arise. Protection needs to be multi-layered, especially when leaks would cause serious harm.
by Perry Lynch
3:30 min read | Audio
Defending network boundaries is an increasingly complicated and difficult task. Cloud services, remote access, and mobile devices can make it difficult to identify the exact boundaries of a network. CIS Control #12, which deals with the defense of network boundaries, is correspondingly complex. It pays to remember that boundary protection isn't just a matter of securing the front lines, it’s also a major component in a layered defense strategy.
Managing the task
Securing the boundaries means paying attention to new threats and attack methods and evaluating them against the needs of the business. Achieving a balance between effective security and user needs will require frequent risk analysis and constant communication with upper management. By doing so you will enable enforcement of an effective and realistic security plan that supports the business needs of your network.
A well-structured network architecture includes not just a DMZ for the limited number of Internet-facing systems, but also specific security zones for internal servers, systems management workstations, and other business-critical systems or applications.
Network scanning is necessary to make sure no one attempts an end run around the proxy. Such attempts might come from malware or from impatient users trying to circumvent the rules. Unauthorized VPN connections might send encrypted traffic through the proxy and present a security risk even if their purpose is relatively innocent.
Decryption of network traffic should take place at the proxy level. That lets it apply application-level security on top of IP and port filtering. The proxy will use whitelisting or blacklisting to prevent connections to malicious servers. Whitelisting is safer, but it's difficult to maintain a complete list of approved domains and IP addresses without constantly adding to it. Blacklisting requires constant updating from services that list rogue addresses.
Both inbound and outbound traffic needs filtering. Only ports and protocols that are considered mission-critical should be permitted outbound through the firewall. Additionally, blocking access to known malicious domains will defeat many phishing attempts. If malware can't reach a command and control server, it becomes far less effective, and easier to eliminate.
Intrusion prevention and detection
Preventing unauthorized activities and catching them as they happen are crucial to boundary protection. The Intrusion Detection/Prevention Systems (IDS/IPS) should be configured to alert and/or stop a majority of attempts by catching suspicious traffic. Signature-based detection is the traditional approach, but sandboxing and other methods can be considered as supplemental tools to detect zero-day attacks.
Monitoring should record the headers of any suspicious packets, if not the whole packet. This information is valuable for event monitoring, so that the source of the problem (external or internal) can be identified. Analytics run on this information can turn up patterns that are too subtle to detect from a small sample.
Malicious traffic can piggyback on all kinds of protocols to escape notice. For instance, if large numbers of senseless DNS requests are being sent out, they may cloak communication with a hostile server. For this reason, DNS queries should only be permitted to trusted external servers, many of whom can provide filtering services to further limit the ability to introduce malware to the network.
Security would be simpler if the entire network were physically behind the router and firewall. However, most businesses find that allowing remote access increases productivity and improves employee satisfaction. The amount of control IT management can exercise over these devices is generally less.
The CIS control recommends requiring all remote access to use two-factor authentication for logins. If those devices fall into the wrong hands or if someone steals the password, an additional factor such as a token or a text message will make it harder for them to take advantage of it.
If the business lends devices for use outside the office, it should set up remote device management for them. This will ensure they stay up to date on patches and have a secure configuration. In the case of cell phones and other smart devices, it should include remote wiping. BYOD devices should meet company-set security standards before getting access.
Business partners that connect to the network can be a serious risk if they don't observe high security standards. The business needs to specify security standards which connected partners have to meet, then monitor their access.
Building cyber defenses, CIS control #11: Secure configurations for network devices such as firewalls, routers and switches
by Perry Lynch
3:30 min read
Firewalls, routers, and switches play a critical role in network security. How well they succeed depends on the level of attention administrators pay to their configuration. CIS Control #11 addresses the need to configure network devices carefully and avoid mistakes that could let intruders in.
Remember that it's not just the network perimeter that needs protection! Every switch and access point in the network needs to stay secure. It may take some initial effort to do this but keeping them secure is not too difficult as long as there are procedures in place and they are followed routinely. Software automation can also be used to keep the task manageable.
Most of the measures described in this control can be summarized as always providing accountability for the configuration and maintenance of network devices. It should always be possible for administrators to find out what the device configurations are, what has been changed, by whom, and why. This should be managed as part of a change/configuration management process that is used throughout the enterprise.
Configure all devices securely
Although every network device needs individualized configuration, there is a known pattern to the configuration process, and the default setup in most systems is geared more towards convenience than security. A strong configuration changes the administrative account name, implements two-factor authentication, and disables all unnecessary services. In particular, all command-line access should be via SSH V2, with Telnet disabled. Administrative access to the devices should only be permitted from within the network environment; access from the Internet should be disabled prior to implementation.
A configuration management process should be established and used to record secure configurations for each device. Along with keeping track of the standard secure configuration, this enables network administrators to run periodic comparisons of the current state against the recorded standard to ensure consistency of configs and allow audits against the change management process. Automation tools are valuable for checking all network devices regularly and reporting any discrepancies.
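A minimal version of such a comparison can be built on a plain text diff. The device name and configuration lines below are hypothetical, used only to illustrate how drift from the recorded standard might surface:

```python
import difflib

def config_drift(baseline: str, current: str) -> list[str]:
    """Return unified-diff lines showing drift from the recorded standard."""
    diff = difflib.unified_diff(
        baseline.splitlines(), current.splitlines(),
        fromfile="baseline", tofile="current", lineterm="",
    )
    return list(diff)

# Hypothetical router config snippets, purely for illustration.
baseline = "hostname edge-rtr-01\nno ip http server\nip ssh version 2"
current  = "hostname edge-rtr-01\nip http server\nip ssh version 2"

for line in config_drift(baseline, current):
    print(line)
```

An automation tool would pull the running config from each device on a schedule, run this comparison, and report any non-empty diff for review against the change management record.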
Sometimes it's necessary to make exceptions for specific business purposes, such as allowing a port which isn't normally open. The first step in doing this should be a risk assessment, weighing the loss of security against the need to get something done. When the need for it is over, administrators should revoke it. These temporary changes should be tracked in an open service desk or change management ticket to ensure they are returned to normal and not forgotten.
Keep patches up to date
It may seem obvious that all network devices should have the latest security patches, but the practice can be complicated: patching a router or firewall usually requires at least a little downtime, and there's a risk that it won't come back up properly. Updated devices will also need testing afterwards to make sure their functionality hasn't changed.
Every patch which becomes available should be evaluated for its importance and its impact on the network. It may be safe to skip over one which just improves performance, but a patch which includes serious vulnerability fixes needs to be installed as quickly as is consistent with good management and your organization’s policies.
Automated testing will let the IT department know quickly if there are any problems with the patch. If there are, they can work on fixing the problem or fail over to another device.
Limit administrative access
The control recommends isolating administrative access from normal network usage as much as possible. Ideally, just one machine should handle all administrative tasks. This system should function primarily as a console, with limited domain rights and with Internet access restricted to select vendor support sites if at all possible.
The goal is to limit the opportunities to compromise the admin system. If the only way to change the device settings is from one specific system or subnet, unauthorized attempts will be very difficult to accomplish. Using just one machine also simplifies logging and accountability.
The network ought to be segmented so that other machines can't access the administrative computer. A VLAN within the business network will let the administrative machine communicate with the network devices but not have any direct connection with the business portion of the network. Another approach is to have a separate network interface controller for the admin machine.
by Dwayne Stewart
3:45 min read or Audio
In the event of a security breach of your network, it is likely that the attackers have altered or destroyed important data and security configurations. The tenth CIS control, data recovery capabilities, addresses the importance of backing up system data and properly protecting those backups. By doing so, you ensure your organization's ability to recover lost or tampered-with data.
Every minute your network is down is productivity lost. Administrators must ensure that up-to-date, functioning restoration data has been properly protected using physical safeguards and data encryption, both at rest and in transit. Failure to establish a reliable and secure data recovery solution could mean the difference between a smooth return to standard operations and scrambling for days or weeks to rebuild systems, just to get back to where you were before the data loss. No one wants that.
A step-by-step breakdown of the proper controls to ensure you can recover your data:
Ensure Regular Automated Backups
A fundamental component in the implementation of an efficient backup process is automation. Humans are prone to error. Beyond mental lapses, we are susceptible to illness and mobility-limiting natural disasters, to name just a few possible contingencies.
Numerous applications are available that can streamline the backup process and achieve data redundancy. Maintaining a redundant set of up-to-date backups at an off-site facility is essential and can help ensure data recovery in most situations. A useful rule of thumb is 3-2-1: keep at least three copies of your data, on two different types of storage media, with one copy stored offsite.
Perform Complete System Backups
It is important that a comprehensive backup strategy be implemented. This should allow for the speedy recovery of data, whether it be a few specific files or an entire server. One useful technique for scheduling system backups is the Grandparent-Parent-Child rotation: daily backups (children) retained for a short period, weekly full backups (parents) retained for several weeks, and monthly full backups (grandparents) retained the longest.
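One way to sketch a Grandparent-Parent-Child rotation in code: treat monthly fulls as grandparents, weekly fulls as parents, and daily backups as children. The specific calendar choices below (the 1st of the month, Sundays) are just one possible convention, not a standard:

```python
from datetime import date

def gfs_tier(day: date) -> str:
    """Classify a backup date under a simple GFS rotation:
    monthly 'grandparent' on the 1st, weekly 'parent' on Sundays,
    daily 'child' otherwise. (One possible convention among many.)"""
    if day.day == 1:
        return "grandparent"   # retained longest, e.g. 12 months
    if day.weekday() == 6:     # Sunday
        return "parent"        # retained e.g. 4-5 weeks
    return "child"             # retained e.g. 7 days
```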
Test Data on Backup Media
All the automation in the world won't save you if your backups are corrupted. The integrity of both your backup system and the system images themselves must be tested regularly.
CIS Control #10 states, "Once per quarter (or whenever new backup equipment is purchased), a testing team should evaluate a random sample of system backups by attempting to restore them on a test bed environment."
Variations of the Grandparent system explained above can also be easily adapted to work here.
Backups could be directed to various locations, such as network-attached storage, removable media, or a cloud-based datastore. The size and budget of your department will directly affect what approaches are feasible for you. It is important to ensure that onsite backup data is not directly accessible by other hosts on the network. Direct access to backup data should be limited to the backup utility used to perform backup and restore activities. Ideally, archived data should be stored offsite and offline with physical safeguards.
The biggest mistake you can make is assuming your organization will not be targeted. Do not assume that because you are not handling government secrets it is alright to leave the removable media holding your backups sitting on your desk. Physical security measures for media containing backup data must be enforced as rigorously as those pertaining to the network. It is also important to ensure that backup data destined for off-site storage is encrypted when saved to removable media.
Ensure Backups Have At Least One Non-Continuously Addressable Destination
More explicitly, CIS control #10 specifically urges that "...all backups have at least one backup destination that is not continuously addressable through operating system calls."
After gaining a foothold in a system, attackers typically enumerate the systems present in your network, slowly mapping its architecture and attempting to escalate privileges across multiple points.
Because of this, it is unsafe to assume that any backup data accessible through your network is ultimately safe. As mentioned in the 3-2-1 method, and explicitly urged in CIS Control #10, at least one back-up should be located offline and preferably offsite.
The most important ideas to remember when designing your backup systems are: automate the process, back up complete systems, test your backup media regularly, protect backups physically and with encryption, and keep at least one backup destination offline and preferably offsite.
Addressing each of the above items will help to ensure the safety and recoverability of your network systems and company data.
by Andrea Lee Taylor
1:45 min read or Audio
Every once in a while in the annals of cybersecurity there is news that isn’t a warning about the newest breach or the release of the latest patch. In this case the news is good for Maryland buyers of cybersecurity.
The General Assembly of Maryland, on April 9th, passed the Cybersecurity Investment Incentive Tax Credit Bill (SB 228). It provides for “…authorizing certain buyers of certain technology to claim a credit against the State income tax for certain costs; providing that the credit may not exceed certain amounts under certain circumstances; requiring the Secretary of Commerce to approve each application that qualifies for a credit…For any taxable year, the credit allowed…may not exceed $50,000 for each qualified buyer.” (LegiScan)
The cyber incentive bill is unique in its agency and platform. Simply restated, it provides for a credit for buyers of cybersecurity services and products from Maryland companies. “This is a first-in-the-nation legislation and we’re looking forward to some really great successes,” said Senator Guy Guzzone (D), primary sponsor of the bill. Cosponsoring were Senators Adelaide Eckardt (R), George Edwards and Andrew Serafini (R).
Cybersecurity is this century’s absolute fact of life. For any business, the necessity for security is coupled with the budget parameters available to fund a flexible, strategic cyber plan. Any financial assistance in obtaining services or products is a welcome support and boost to doing business.
Qualified buyers may claim a credit on their state income tax up to 50% of the cost of the technology or service purchased from qualified sellers. As a qualified seller, we are excited to be able to share in this opportunity.
“Our focus has been strictly cybersecurity for over 16 years now and this legislation is a first and is a great help to businesses. Anchor looks forward to putting our experience to use helping small businesses improve their security posture,” said Anchor Technologies’ CEO, Peter Dietrich.
Cybersecurity is a necessity. A plan for what to implement and when keeps businesses on track in protecting their important data. Knowing one does not have to worry about whether the company’s data is as secure as possible allows owners to concentrate their efforts on conducting and growing a business. Thank you to the state legislators for helping to empower small business in Maryland.
Building cyber defenses, CIS control #9: Limitation and control of network ports, protocols and services
by Marian Bodunrin
4:00 min read or Audio
Transmitting and receiving data via network ports is a necessary evil. Because a network process must use a specific port to communicate with another, there is no avoiding the inherent risk. The most perilous services on a network are the ones you don't know are running. Default system installations often activate services that serve little or no useful purpose and often go unnoticed. "Shadow IT" operations may start up unauthorized, poorly secured services.
There are 65,535 TCP ports and 65,535 UDP ports. Some of them are more vulnerable than others. For example, TCP port 21 connects FTP servers to the internet but has several vulnerabilities, such as cleartext authentication, which make it easy for an attacker with a packet sniffer to view usernames and passwords. Telnet on TCP port 23 also sends data in cleartext, which makes it vulnerable to attackers listening in to intercept users’ credentials, and to man-in-the-middle attacks. The busiest ports are also the easiest for attackers to infiltrate: TCP port 80 for HTTP supports web traffic, and attacks on web clients that use port 80 include SQL injections, cross-site request forgeries, cross-site scripting, and buffer overruns.
A well-run, secure network does not expose any service without a reason. The issue arises if no one notices the services that are running, no one may be monitoring them or keeping them up to date. CIS Control #9 addresses the Limitation and Control of Network Ports, Protocols and Services, and gives specific recommendations for avoiding the risk of unmanaged services and ports.
System administrators need an established baseline of what ports and services are supposed to be running on each machine. In addition, they need to run regular, automated port scans. Simple, free software is available that will do the job. The scan should note any differences from the baseline and notify the administrators.
The first time a scan is run, IT administrators are likely to discover previously unaccounted-for or undesirable services, possibly due to oversight. These services should be tracked and disabled upon discovery. Most importantly, perform port scans on a regular basis to determine which services are listening on the network, which ports are open, and which version of the protocol and service is listening on each open port. All such efforts further reduce the attack surface.
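The scan-and-compare loop is straightforward to sketch. The connect-scan helper below should only ever be pointed at systems you are authorized to test, and the baseline sets in the example are hypothetical:

```python
import socket

def open_ports(host: str, ports: range, timeout: float = 0.5) -> set[int]:
    """TCP connect-scan a host. Only scan systems you are authorized to test."""
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                found.add(port)
    return found

def compare_to_baseline(observed: set[int],
                        baseline: set[int]) -> dict[str, set[int]]:
    """Anything unexpected should trigger an alert; anything missing, a check."""
    return {"unexpected": observed - baseline, "missing": baseline - observed}
```

Dedicated scanners such as Nmap do this far better; the point here is the baseline comparison, which is what turns a raw scan into an actionable alert.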
Every software installation carries some risk. It could open up unmanaged ports by default, just because they might be useful in certain cases.
When installing new software, the best practice is to identify any added services and configure the system to run only those that have value for business operations. Running a port scan before and after installation will verify whether any others were added, and all legitimate services should be securely configured.
For an organization to adequately mitigate risks, a layered perimeter of defenses such as application-aware firewalls, network access controls (NAC), intrusion detection/prevention systems should be deployed to avert unauthorized access. “Defense in depth” is the watchword of a good security setup.
Use of endpoint firewalls, removal of all unnecessary services and segmenting critical services across systems, and applying patches as soon as they become available, will reduce your organization’s risk exposure. For instance, a network scan can identify all servers which are visible from the internet--if any don't need to be visible, moving them to an internal VLAN will keep them safe. If they run any unauthorized services that aren't caught, at least they won't be directly reachable from outside.
Running multiple critical services on the same machine is an invitation to trouble. If the same machine runs DHCP, SMTP and HTTP, an attacker that breaches one could jump to the others. Each of those services should have its own virtual or physical machine, with just the ports needed to run it.
It's easy enough to install multiple virtual machines on one computer. That way, each port's services have their own operating system, root file directory, and network settings. If one of them gets compromised, the problem is more likely to stay localized long enough to identify it and fix it.
Minimize to Maximize
Just as building management needs to know every door through which people can enter and how that door is secured against sneaking in undetected, so IT management needs to know every port and service the servers expose. If they're there for a reason, they should be managed and secured. If there's no reason for them, they could be an unguarded back door to the network, which should be shut and locked. Though it is impossible to eradicate all risk, exposure can be greatly reduced when appropriate controls are put in place to deter an attacker. Implementing CSC #9 will further mature your organization's cybersecurity posture, and deploying a continuous monitoring tool as an ongoing exercise will contribute to reducing risk and maximizing cybersecurity.
by Marian Bodunrin
4:30 min read or Audio
Malware is a type of computer program designed to infect a legitimate user’s computer with the intent to inflict harm. It comes in various forms, such as viruses, Trojans, spyware, and worms. Malware is a huge and growing problem, costing businesses millions of dollars, and it typically exposes or damages vital data. New forms constantly appear and can be hard to catch. CIS Control #8 addresses recommendations that should be implemented to reduce an organization’s risk.
The degree of damage caused by malware varies according to the type of malware, the type of device that is infected and the nature of the data that is stored or transmitted by the device. As a result, defense strategy needs to act on multiple levels. Defenses need to prevent malware from being installed, from running if it is installed, and from spreading if it runs. This is defense-in-depth and requires a strong set of automated tools.
Automated malware detection and removal software is an absolute requirement. It needs to cover everything on the network: servers, workstations, mobile devices, and anything else that has a processor and runs code. Regular updates are necessary to keep up with new threats, and machines should be checked to make sure they're getting the updates. Also, periodic vulnerability scans, along with malware detection and blocking should prevent a network from being compromised and succumbing to a botnet.
Shadow IT increases risk. If people are running machines that aren't authorized, those machines aren't going to be consistently monitored and protected. The first and second CIS controls stress the importance of keeping track of everything on a network, and malware protection is one of the reasons such inventories are so important.
It isn't enough to put protective software on each machine without an overall plan. Defenses are very hard to manage if haphazardly installed. Each machine would need its own updates, and hostile code that gets blocked on one system could get through on another. Centrally administered and automated protection gives your network a more consistent defense.
Keeping track of what protective software finds is important. It should be set up to log all incidents, and part of administrators’ responsibilities is to review the logs. If an issue turns up on one machine, it may be present elsewhere as well. If an attack occurs repeatedly, it's time to check the defenses against it and strengthen them as necessary.
Network monitoring needs to check for traffic that could indicate malware. The most popular malware model today is the Command & Control (C&C), where it reports to a server, sends information, and gets instructions. The monitoring system should log DNS queries in order to catch requests to C&C domains. Effective firewalls can capture suspicious file transfers and block hostile traffic. This isn't limited to blocking ports and IP addresses; the best software can catch malicious packets at the application level, after SSL decryption.
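Checking logged DNS queries against a threat-intelligence blocklist might be sketched as follows. The log format (one query name per entry) and the blocklist contents are assumptions for the example:

```python
def flag_cc_queries(dns_log: list[str], blocklist: set[str]) -> list[str]:
    """Flag logged DNS queries whose domain, or a parent domain, appears on a
    threat-intelligence blocklist. Assumes one query name per log entry."""
    flagged = []
    for qname in dns_log:
        labels = qname.lower().rstrip(".").split(".")
        # Check evil.example.com, then example.com; the bare TLD is skipped.
        for i in range(len(labels) - 1):
            if ".".join(labels[i:]) in blocklist:
                flagged.append(qname)
                break
    return flagged
```

A real deployment would feed this from the resolver's query log and refresh the blocklist from a threat-intelligence service; any hit is a candidate C&C beacon worth investigating.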
If a device is caught running malware, the network protection software should quarantine it immediately. Keeping malware from spreading buys time to fix the problem in spite of its urgency.
Limiting the attack surface
External devices, such as thumb drives, are convenient, yet they create risks. Many people are too trusting of drives received as promotional giveaways, and even legitimate ones are sometimes inadvertently infected. Auto-running when a device is inserted is a convenience that ought to be buried; this feature should be disabled on all machines. Thumb drives are the most common example, but the caution applies to all mountable devices brought in from outside.
A solid defense will have anti-malware software scan for each newly mounted device. If there are suspicious files on it, the scan will automatically dismount it. Newly downloaded files need the same consideration. Each one should be scanned, and the ones that are flagged should be blocked from running.
The multi-layered approach
It's unrealistic to expect any defense to stop all malware at the perimeter. There are just too many threats, new ones being invented and unleashed all the time, and some will make it past the first line of defense. Stopping threats requires a coordinated effort in the firewall, devices on the network edge, server protection, and monitoring.
The multi-layered approach is to prevent malware from being installed, stop it from running if it does get installed, and keep it from spreading if it runs.
Everyone understands that malware protection is necessary but turning it into a systematic set of practices takes a coordinated effort. Everyone involved needs to be working on the same comprehensive cybersecurity plan.
by Marian Bodunrin
3:45 min read | Audio
Web browsers and email clients are very common points of entry for malicious code because users rely on them daily. Content can be manipulated to entice users into taking actions that greatly increase risk, resulting in loss of data and other attacks. Controlling which browsers are used, and maintaining a defined list of approved ones, is critical. CIS Control #7 addresses several key points in protecting an organization’s environment and provides recommendations to mitigate risks. While some of the controls may seem too restrictive for an organization's needs, most are clearly necessary, and implementing them will ensure a more robust cybersecurity blueprint.
An organization’s browser, its portal to the internet, is also a first line of defense against malware threats. Minimizing attack vectors should be the number one goal: allow only fully supported web browsers to execute, and deploy their updates. As much as possible, updates should be applied as soon as they become available, and a formal written policy addressing user behavior should be developed.
At times it can be difficult to control which sites users access. Enforcing a network-based URL filter that limits the system’s ability to connect to websites not approved by the organization helps close this gap.
Keep in mind that when exploitable vulnerabilities in the browser itself are not available, attackers target common browser plugins instead, which may let them hook into the browser or directly into the operating system. To mitigate this risk, uninstall or disable any unauthorized browser plugins or add-on applications.
An e-mail security program needs to provide confidentiality, data origin authentication, message integrity, and nonrepudiation of origin. CSC #7 provides several recommendations to help ensure email security. Using a spam filtering tool reduces the volume of malicious email entering the network. Deploying the Domain-based Message Authentication, Reporting and Conformance (DMARC) protocol ensures that legitimate email is properly authenticated against established SPF (Sender Policy Framework) standards, so that fraudulent activity appearing to come from the organization’s domains is blocked. Installing an encryption tool to secure email and communication adds another layer of security for users and the network.
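A DMARC policy is published as a DNS TXT record of semicolon-separated tag/value pairs. As a sketch of what such a record carries, the following minimal parser (the record string in the test is a hypothetical example) splits it into tags; the `p=` tag is what tells receiving servers whether to quarantine or reject failing mail.

```python
def parse_dmarc(txt_record: str) -> dict:
    """Parse a DMARC TXT record's tag/value pairs into a dict.
    Simplified sketch: does not validate tags against the full spec."""
    tags = {}
    for part in txt_record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags
```

For example, a record with `p=reject` instructs receivers to drop messages that fail authentication, while `rua=` names the mailbox that receives aggregate reports.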
Spoofed messages are dangerous because they can create a false sense of trust. Employees are more likely to respond to a message that seems to come from someone they know. The SPF standards guard against this by checking if messages are coming from a mail server that is authorized to use the sender's address. While the CIS specifically recommends SPF, other protocols such as DKIM work well with it, and implementing both is advisable.
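The SPF check described above can be illustrated with a deliberately simplified sketch. It handles only `ip4:` mechanisms from an SPF TXT record; a real SPF evaluator (per RFC 7208) also resolves `include:`, `a`, `mx`, and other terms, so this is an illustration of the idea, not an implementation.

```python
import ipaddress

def spf_ip4_authorizes(spf_record: str, sender_ip: str) -> bool:
    """Check whether sender_ip matches any ip4: mechanism in an SPF record.
    Simplified sketch: ignores include:, a, mx, and qualifiers."""
    ip = ipaddress.ip_address(sender_ip)
    for term in spf_record.split():
        if term.startswith("ip4:"):
            # Each ip4: term names a network the sender is allowed to use.
            if ip in ipaddress.ip_network(term[4:], strict=False):
                return True
    return False
```

A receiving server performing this check would compare the connecting client's IP against the record published by the domain in the sender's address, which is exactly how spoofed mail gets caught.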
Implementing this control should be neither very disruptive nor very difficult. In a security-focused organization, end users are typically not allowed to install their own software, and updates are deployed as soon as they are available by the authorized department. Software needs to be kept up to date in general, and Web browsers and mail clients will be part of this practice. Administrators should also restrict and monitor the use of plugins. At times there might be special work requirements that involve a plugin--such requests should go through the administrator for approval.
The simple rule to follow when implementing this control is, “Make it simple for the users or they will find a way around it.” Increasing complexity, or the effort users have to put in, often leads to privilege misuse or other attempts to defeat the controls. It is worth remembering that human error remains behind most breaches and incidents. Overall, implementing this control provides a large improvement in safety for relatively little effort.
by Marian Bodunrin
2:30 min read
When properly implemented, Control #6 can bring an organization’s security program to a higher level of maturity. Maintaining, monitoring and analyzing audit logs helps gain visibility into the actual workings of an environment. Also, with proper implementation, the control can help detect, understand or recover from an attack.
Despite best practices, it is impossible to safeguard a network against every attack. When a breach occurs, log data can be crucial for identifying its cause and for collecting evidence--provided the logs were configured properly before the incident occurred.
Deficiencies in security logging and analysis allow attackers to hide their location, malicious code, and activities on victims’ machines. Without complete, protected logging records, an organization is blind to the details of an attack, which can go on indefinitely and cause significant damage.
To ensure readiness and effective log maintenance, monitoring, and analysis, the Center for Internet Security (CIS) recommends a set of sub-controls under Control #6.
Maintaining security logs and actively using them to monitor security-related activity within the environment is essential, especially during post-breach forensic investigation. An organization must therefore develop procedures to review and analyze logs in real time, so that attacks can be detected quickly and responded to in good time. It is one of several best practices that move an environment toward a safer cybersecurity posture.
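The kind of real-time review described above can be sketched with a small detector. This is an illustration only: the `FAILED LOGIN user=<u> ip=<addr>` line format is a hypothetical example, and a real deployment would consume events from a SIEM or log pipeline rather than a Python list.

```python
from collections import Counter

def flag_failed_logins(log_lines: list[str], threshold: int = 5) -> set[str]:
    """Flag source IPs appearing in at least `threshold` failed-login lines.
    Assumes a hypothetical log format: 'FAILED LOGIN user=<u> ip=<addr>'."""
    counts = Counter()
    for line in log_lines:
        if "FAILED LOGIN" in line and "ip=" in line:
            ip = line.rsplit("ip=", 1)[1].split()[0]
            counts[ip] += 1
    return {ip for ip, n in counts.items() if n >= threshold}
```

The point is not the specific heuristic but the practice: logs that are collected, parsed, and watched continuously turn a brute-force attempt into an alert instead of a post-mortem finding.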
by Andrea Lee Taylor
We have considered individually the Center for Internet Security’s top 5 controls for effective cyber defense. Together, they are a force. Perhaps you’re already aware of CIS’s statistic: implementing just the top 5 of the 20 controls reduces exposure to known cybersecurity vulnerabilities by 85%. If I got that kind of return from the stock market, I’d be retiring. Next week.
And the recommended set of actions is not impossible to implement--far from it! A shift in focus may be required, but we find most employees and most board members are amenable. Implementing procedures and processes means people may be inconvenienced, even personally so. But more often than not they are open to adopting and adapting when it is for the overall good, including the good of the organization.
When people are educated as to what is important, why it is important and, more importantly, how they can help—it’s been our experience they are more willing to be a part of what is being asked rather than a speed bump to greater security.
CIS has a resource that is not news; neither are the controls. They are updated periodically: you can download the latest CIS Controls (V7) and read the white paper Practical Guidance for Implementing the Critical Security Controls (V6). It is a way and a place to start. The return on investment is strengthened cyber defenses and protection, streamlined administrative security functioning, and ultimately a savings in financial resources. That is not to say this isn’t ongoing work requiring financial backing; it is. But job security and interesting challenges are important, and being one breach away from exigency is no way to live or conduct business.
Someday the CIS Controls advice will not be revolutionary in its results because it will be boringly customary. Yet the controls have not been implemented to such an extent as to render their advice moot or their results less than stunning.
They’re that worth implementing.
Update: V7 of the Controls adds Control #6 to the basic list. CIS’s approach always keeps an eye on the current threat landscape as well as the latest tools developed for cyber defense. And still, the essential remains the same: making sure the basics are covered makes an exponential difference in an organization's security stability.