by Dwayne Stewart
Implementing all the CIS controls won't guarantee there will never be a successful attack on your systems. Sooner or later, someone could penetrate your defenses and access confidential information or plant malware. To be prepared, you need an incident response plan, the focus of CIS Control #19.
Have a plan in place
Figuring out how to manage incidents as they occur is bad practice in general and ultimately not in the best interests of your organization. Speed is of the essence, and to minimize service disruptions, a plan should be in place and people prepared to execute that plan.
Steps to take in response to an incident should be delineated. A typical list includes preparation, detection and analysis, containment, eradication, recovery, and post-incident review.
Each step will include detailed instructions on what to do in the various situations that may occur. The more thorough the plan, the more efficient the response.
Assign duties and roles
To carry out a plan effectively, an incident response team should be created, consisting of staff that are familiar with the plan. They understand what's expected of them and who will be making decisions. Each team member’s role should coincide with their position within the organization.
The decision-making authority needs to be clear. This allows for prompt response to a discovered breach, as opposed to lengthy discussion about who should do what. Emergencies are generally unpredictable, so one or more levels of backup authority are needed. The less time it takes to find someone who can initiate and direct action, the easier it is to mitigate the issue.
The response team needs to have the skills and training to deal with situations under pressure. There are far more kinds of attacks than any person can be familiar with, so familiarity with mitigation tools and processes, as well as good problem-solving skills, are important.
Establish reporting procedures
A quick response requires getting information to the right people quickly. Both software and people play a role. The proper tools need to be in place to provide network visibility and timely notification of malicious or anomalous network activity. For example, intrusion detection and endpoint protection software should issue alerts when suspicious activity is detected. Logs from all network infrastructure devices and network security controls should be collected and analyzed by a SIEM or other log management utility. This single pane of glass allows the logs to provide a clear, concise picture of what has occurred.
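As a minimal illustration of the kind of alerting a SIEM or log utility provides, the sketch below scans syslog-style lines for repeated failed logins. The log format, regular expression, and threshold are all assumptions for illustration, not any product's actual rules:

```python
import re
from collections import Counter

# Assumed OpenSSH-style failure line; real SIEM rules cover many formats.
FAILED = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

def flag_brute_force(log_lines, threshold=5):
    """Return source IPs appearing in `threshold` or more failed-login lines."""
    hits = Counter()
    for line in log_lines:
        match = FAILED.search(line)
        if match:
            hits[match.group(1)] += 1
    return {ip for ip, count in hits.items() if count >= threshold}
```

A rule like this would feed an alert queue rather than being run by hand; the point is that raw logs become actionable only once something is watching them.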
In addition, employees need to know how to submit a report if they see something unusual that could be an indication of a compromised machine. There should be report forms so that people will provide as much useful information as possible. They should have entries for the system affected, the symptoms, the date and time, and the actions taken before and after noticing the incident.
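The report fields described above could be captured in a simple structure; this sketch is illustrative, and the field names are assumptions rather than any standard schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class IncidentReport:
    """Fields an employee report form might collect (illustrative names)."""
    system_affected: str
    symptoms: str
    observed_at: datetime
    actions_taken: str = ""  # what was done before/after noticing the incident

    def is_complete(self) -> bool:
        # A submission is only useful if it names a system and the symptoms.
        return bool(self.system_affected and self.symptoms)
```

Even a paper form benefits from the same discipline: required fields make reports comparable and usable under time pressure.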
Contact information needs to be included in the incident response plan so that members of the response team can be contacted immediately once an incident is confirmed. The information needs to be up-to-date in order to avoid delays.
There can be very long periods between security incidents, and members of the incident response team can't afford to forget how to perform their duties. They need to perform well under stress, even if they don't do it often. Periodic exercises will help them to remember what they need to do and to avoid confusion and prevent simple mistakes when responding to an incident. They'll also help to make sure all the necessary information is still valid. Something as simple as an outdated phone number can seriously slow remedial action.
"Be prepared" is famously the Boy Scout motto. That state of readiness applies to intrusions and breaches, too. The response needs to be planned, organized, and smoothly handled. A well-secured site will encounter few such situations, but it takes only one to cause major data loss, financial damage, and loss of trust. Properly managing security incidents once discovered can help minimize their impact and ensure an organization's continued survival.
by Perry Lynch
Flaws in software can leave your information systems vulnerable to attacks. Information about bugs in popular commercial and open-source software is available to everyone. Attackers exploit them once they're known, so keeping up with patch releases is essential to security. Applications developed in-house, as well as obscure commercial ones, aren't subject to the same broad scrutiny, but they may still have vulnerable code that needs to be dealt with. CIS Control #18 addresses the methods of keeping applications secure, whether they're acquired or internally developed.
When it comes to vulnerable applications, the source doesn’t matter. When dealing with applications from outside sources, whether commercial or free, the most important consideration is to stay with a supported version and apply all security patches in a timely manner. This doesn't necessarily mean having the latest version! When available, using the long-term support (LTS) version of an application can be less disruptive than updating to each new version. However, regular bug fixes and security patches still need to be applied quickly, and you should have an upgrade plan ready when the newer version is released.
In many cases, by the time a security patch is released, the vulnerabilities that it addresses are publicly known. Publicly-available exploits for the code may already exist or be easily discovered by reverse-engineering the change. Criminals will figure out how to take advantage of it if they haven't already. There's no escaping the need or the urgency required to fix these vulnerabilities.
Developers need to design security in from the start, not add it on after they have working code. Unfortunately, this remains a recommended practice and not a common one, so organizations that develop their own applications may have a lot more work to do.
There's pressure to get the code working on time, which sometimes results in security considerations being pushed to the background. Yielding to the pressure will only cause trouble. It's harder to go back and catch every vulnerability than it is to stick with security-oriented development practices that minimize the chances of exploitable bugs.
The practices which bake security in from the start include threat modeling during design, secure coding standards, peer code review, input validation, and the use of vetted libraries and frameworks.
Considerations for all applications
Development and QA work should be done on development systems that don't have access to production data. Developer accounts shouldn't give access to production systems; that way no one can work on live data by mistake.
Developers can still miss bugs, and configuration errors can introduce vulnerabilities. To maintain the safety of your applications and data, organizations should always test new implementations of the applications they use with automated scanning software. Being the first to discover a bug is better than living with an undiscovered vulnerability.
Code should go to production only after being thoroughly tested. A full, automated test needs to be run before each release, not just on the changes. Bugs in earlier versions have a way of creeping back, and seemingly inconsequential changes can introduce problems.
If any new or previously undocumented vulnerabilities turn up, they should be carefully documented so that others can replicate them. A confidential report should then go to the software's maintainer. Until it's fixed, measures should go into place to prevent exploitation of the bug. These could include input filters or a configuration setting to avoid the dangerous use case.
Application firewalls are another important layer of protection against unknown or unpatched bugs. They guard against exploits of known bugs and common attack patterns, such as syntactically malformed requests.
OWASP has some excellent resources for developing secure applications.
The stakes are high
A previously undiscovered bug can turn into an active threat without warning. Zero-day exploits take advantage of these flaws to steal vast amounts of data or gain control of computers. Attackers may have information about these vulnerabilities before you do.
Badly designed and out-of-date applications are especially at risk. Protective measures need to cover purchased, free, and in-house software. They need to guard not only against known issues but against ones that are still unknown. A multilayered approach consisting of good design, maintenance, and application-level protection against malicious traffic will provide the best protection.
by Perry Lynch
Security Awareness Training is one of the most cost-effective ways to improve your organization's overall security posture. Most breaches are at least partly due to human error, and while nothing can be done to completely eliminate errors, a good training program will greatly reduce the potential for security-related mistakes. CIS Control #17 covers the basics of a reliable program and what a good one should do.
The starting point is a skills gap analysis. What skills does each person need to stay clear of dangers, and where do they fall short? Everyone needs to understand the basics, such as setting strong passwords and being wary of spam. People with access to sensitive systems need to be and stay aware of subtler points--such as personally targeted phishing and inappropriate information sharing.
Security programs need to identify and focus on the areas of greatest risk. This applies as much to training as to network configuration or software updates. In this case, focused training is crucial to strengthen areas where technical solutions alone are not enough.
Anyone with access to a network can make a mistake and create problems. So along with training, additional technical measures need to be implemented. The organization should follow the recommendations of Control #14 and grant users only the access rights required to perform their jobs. Those with higher levels of access need to be especially alert.
Any training program needs to produce measurable results; otherwise there's no way to gauge its effectiveness. There should be a focus on specific goals and on closing the skills gap. Training should target the most serious risks associated with each individual's role. One example--people with root access should learn how to protect those credentials and to minimize their use of root accounts.
Security awareness is the understanding of methods which would-be intruders use to deceive people. These techniques keep changing, so users need periodic updates. All employees should study security awareness materials, and management needs to confirm this is accomplished. Senior management has to be included in the training. They continue to be the favored targets of personalized deceptions because they are more likely to have access to sensitive information as well as financial accounts.
Regulatory changes, such as GDPR, may require a new set of priorities. Changes in the information an organization handles may shift the greatest areas of risk. Any awareness training program needs to be able to adapt to properly communicate the risks involved.
Mentoring by more experienced users is often an effective approach. They know better than anyone else where the risks are, and hopefully they have good rapport with the people they're training.
Social engineering exercises should be conducted periodically to assess the current level of users’ awareness, reinforce recent training activities, and to make sure people don't fall back into carelessness.
The most important consideration of security awareness training isn't that people give the right answers on a quiz--the goal is to help them develop habits that prevent errors and to consistently do the right things in practice.
An ongoing program
The most effective awareness programs include routinely-communicated messages from the security team. A regular cycle of policy reminders, educational messages, risk warnings, and messages about current security news will help keep your staff engaged and alert to potential threats.
You can ensure that users are paying attention by including occasional security advice that can assist them personally, and by occasionally naming the front-line heroes--those who first recognized a security threat and promptly reported it to the help desk and/or security team.
With ongoing security education, people will learn to avoid the mistakes that can lead to disaster. People will always make some mistakes, but ingrained security habits will prevent the most common and most serious ones.
by Perry Lynch
Account hijacking lets criminals impersonate employees and contractors. They can trick others into getting information and gain access to systems. It's especially dangerous when they get control of inactive accounts, since they might escape detection for a long time. CIS Control #16 presents ways of preventing account theft and detecting it if it does happen.
How to gain control of accounts
Phishing, brute-force password guessing, and gaining physical access to unattended workstations are some of the ways a would-be invader can steal user credentials. Some users make it easy for the attackers by using common passwords or writing them down where visitors can see them. If a user has a mobile device that logs in automatically, someone who steals it can get into the accounts without further effort.
If the attacker can successfully impersonate the victim by sending and receiving emails from a spoofed account, they may be able to gain access to other accounts by requesting a link to reset their passwords. This is most effective when no one else is currently using the account. Otherwise the account owner may notice the emailed link and suspect something is wrong.
A successful impersonator can email other users and convince them to send confidential information or arrange wire transfers. It could be a while before anyone recognizes the impersonation.
Managing account lifecycles
Deactivating stale accounts reduces the opportunities for impersonation. It also protects against actions taken by disgruntled ex-employees or contractors who might take illegal advantage of their continuing access. A process should be implemented to disable accounts when employees are terminated or contractors complete their current tasks.
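A deactivation process like this can be partly automated. The sketch below flags accounts whose last login falls outside a cutoff; the 90-day threshold and the data format are assumptions, and a real implementation would pull last-login dates from the directory service:

```python
from datetime import date, timedelta

def stale_accounts(last_logins, today, max_idle_days=90):
    """Return account names idle longer than `max_idle_days`.

    `last_logins` maps account name -> date of last successful login
    (an assumed format for illustration).
    """
    cutoff = today - timedelta(days=max_idle_days)
    return sorted(name for name, last in last_logins.items() if last < cutoff)
```

Output from a job like this would feed a review queue, since some dormant accounts (service accounts, staff on leave) need a human decision before being disabled.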
Activity monitoring can catch any accounts that have slipped through the cracks and gone dormant without being closed. A well-structured monitoring system can also detect spurious logins at times when the user wouldn't normally be working, as well as attempts to log into deactivated accounts.
Preventing account theft
Every hijacking method warrants its own type of defense. Password theft can be thwarted with a requirement for strong passwords (CIS recommends 14 characters or more). Two-factor authentication will make it harder to use stolen passwords. All authentication should, of course, use encrypted protocols.
Although CIS no longer recommends frequent password changes as a method of protection, it’s still a good idea to change them on a regular basis. Consider that the most effective way to meet a password length requirement is to exceed it: Use passphrases that are complete with punctuation. These can be easily remembered, which reduces the odds that users will write them down or that attackers will decipher them.
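A minimal policy check along these lines might look like the following sketch. The 14-character floor follows the CIS recommendation above; the small common-password list is purely illustrative, and real systems check against large breached-password corpora:

```python
# Tiny illustrative denylist; production systems use breached-password lists.
COMMON_PASSWORDS = {"password", "letmein", "123456", "qwertyuiop"}

def meets_policy(candidate, min_length=14):
    """Length check per the 14-character recommendation, plus a rejection
    of a few well-known passwords."""
    return len(candidate) >= min_length and candidate.lower() not in COMMON_PASSWORDS
```

Note that a passphrase with spaces and punctuation clears the length bar easily, which is exactly the behavior the policy is meant to encourage.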
Password files need to be encrypted or hashed and be accessible only to administrators. Although current operating systems use password hashing and protected databases, there are other avenues: many departments keep a file of account credentials in a shared folder or network drive. These should be migrated to trusted credential management platforms, using current encryption and authentication methods to ensure that only authorized users can access them.
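For in-house tools that must store credentials, the standard approach is a salted, iterated hash rather than a reversible encoding. This sketch uses Python's built-in PBKDF2; the iteration count is an assumption to be tuned to current guidance:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Derive a PBKDF2-SHA256 digest; store (salt, digest), never the password."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=200_000):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

The constant-time comparison matters: a naive `==` can leak timing information to an attacker probing the verification endpoint.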
Having accounts automatically log out after a period of inactivity reduces the chance for anyone to walk up to an unattended computer and use it. Alternatively, the system can require re-entry of the password after a short time and then let the user continue the same session.
Detecting hijacked accounts
This requires logging of account activity and analyzing it. Inspecting the log for an unusual number of failed logins, or off-hours activity, is an option available to all system managers.
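An off-hours check of the kind described can be sketched in a few lines; the business-hours window and the event format are assumptions, and real schedules vary by role and time zone:

```python
from datetime import datetime

def off_hours_logins(events, start_hour=8, end_hour=18):
    """Flag logins outside business hours or on weekends.

    `events` is a list of (account, datetime) pairs -- an assumed format.
    """
    return [(acct, when) for acct, when in events
            if when.weekday() >= 5 or not (start_hour <= when.hour < end_hour)]
```

Flagged events aren't proof of compromise -- administrators legitimately work odd hours -- but they are exactly the anomalies worth a second look.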
Staying on top of all the accounts that an organization issues keeps opportunists from taking control of them. Tools for centralized account management help in implementing this. Winnowing the list of active accounts down to the ones currently in use means fewer accounts to attack and fewer that can be taken over without being noticed. With ongoing monitoring of account usage, would-be intruders will find far fewer opportunities to pillage.
by Perry Lynch
Wireless access presents a special challenge for network security. A weak security implementation will allow intruders to gain an almost physical level of access; they may be able to bypass your firewall and directly connect to your information systems from locations that are within range of your facilities. CIS Control #15, "Wireless Access Control," provides guidance to minimize this risk.
The risk factors
Unmanaged wireless devices in the hands of trusted users present a significant risk: They provide access to information for trusted users, and are sometimes considered to be part of the network. However, they are not consistently managed or maintained, and are routinely exposed to malware and opportunities for corruption when they are not on your protected enterprise network.
To counter these risks, the access point should be considered as much a policy enforcement tool as it is a network gateway. Your network of access points should be maintained at current patch levels and at the highest possible encryption levels and configured to provide secured access to the enterprise network for corporately-owned devices. Guest devices, either staff or visitor-owned, should be restricted to a network segment or VLAN that provides access to the Internet only. To further limit risk, access points should also be configured to prevent ad-hoc wireless networking and direct client-to-client access within the Wireless LAN.
Configuring access points
Wi-Fi security standards continue to evolve; WPA3 was launched in the second quarter of 2018. The older security protocols, WEP and WPA, have serious known weaknesses and should no longer be used. The TKIP encryption protocol has been deprecated as well. The CIS recommendation is to use WPA2 with AES encryption, which is the default when using WPA2 on modern devices.
Access point firmware needs to stay up to date. The KRACK vulnerability, discovered in 2017, affected virtually all WPA2 implementations. Manufacturers have issued firmware updates to address this issue; implementing these patches is necessary to maintain security.
If you are planning a future Wi-Fi implementation or upgrade, remember that vendors are submitting device designs for certification to the new protocol, with plans to fully support WPA3 in 2019. Make sure your hardware vendor will support a future-proof implementation to get the most from your investment.
Rogue access points
Unauthorized wireless access points can present a serious risk and should be removed from the network whenever they are discovered. Regardless of intent or configuration, they provide unauthorized and/or unprotected access to the network. Left unsecured, they could provide an unencrypted open access channel into your information assets.
Monitoring software that works from an inventory of authorized systems can recognize any unauthorized devices. This makes it possible to block the offending device from the network, then locate and disconnect it.
Many of the available managed access point solutions include Wireless Intrusion Detection Systems (WIDS) capabilities, providing the ability to detect and disable unauthorized access points or the use of various wireless attack tools.
Limiting other devices
Printers and other devices often include their own wireless access as a convenience feature. In a corporate environment, this should be disabled to prevent the printer from becoming an undocumented entry point to the network.
The use of Bluetooth in the environment is an often-overlooked concern: enabling unregulated pairings may permit intruders to gain direct access to computers on the network. Restricting Bluetooth-based services to only support headsets and input devices is easily handled with group policy and should be implemented whenever the environment contains Bluetooth-capable systems.
Limiting less trusted access
BYOD policies are useful, but allowing personal devices unrestricted access to the same network your information systems rely on is never a good idea. Even with the most restrictive policies, the IT department doesn't have full administrative control over devices not owned by the organization. A reasonable compromise is to provide access to a guest VLAN, implement restrictive ACLs between it and the enterprise network, and permit outbound-only Internet access on that VLAN.
In any event, only wireless devices that are owned by the organization should be permitted on the enterprise network. This provides the IT staff with the authority to enforce adequate security restrictions for those devices.
Wireless networks provide value and convenience, but they require care and attention to avoid becoming a security problem. Facilities containing highly sensitive information assets should consider using wireless for guest access only, or avoid it entirely. Enterprise networks that do use it need to employ the latest protocols, restrict its use to authorized devices, and stay on the lookout for unauthorized access points.
by Perry Lynch
The fewer ways there are to reach information, the less risk there is of unauthorized access. This is the point of CIS Control #14, "Controlled Access Based on the Need to Know." This is closely related to Control #13 "Data Protection," but focuses on the access allowed. The specific controls have some overlap, especially regarding encryption and logging. What is distinctive to this control is the emphasis on access control and network architecture.
Identify the Data
Data should be identified and automatically labeled or tagged based on the existing data classification requirements for your enterprise. This can be done using one of several active discovery tools that can investigate the network file shares and desktops to flag documents and folders that match the classification criteria. Upon identification, sensitive files can be relocated into the appropriate data file shares, ensuring that access rights and group policy are easier to maintain and govern.
Isolate the Data
Implementing VLANs for critical servers is a straightforward way to reduce the risk of compromise. Along with servers, VLANs should be configured to support other critical business functions. Microsegmentation should also be enabled, restricting users' ability to connect directly between workstations on the network.
Implementing firewalls or ACLs between each VLAN will ensure that only authorized systems and protocols are permitted to communicate with each other and will significantly reduce the risk of unauthorized data exposure and/or the unchecked spread of malware within the enterprise.
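The default-deny idea behind inter-VLAN ACLs can be illustrated with a toy rule table. The segment names, ports, and rules below are examples only, not a recommended policy:

```python
# Explicitly allowed (source VLAN, destination VLAN, destination port)
# flows; anything not listed is denied. All names/ports are illustrative.
ALLOWED_FLOWS = {
    ("workstations", "servers", 443),  # HTTPS to application servers
    ("mgmt", "servers", 22),           # SSH from the management segment
}

def permitted(src_vlan, dst_vlan, dst_port):
    """Permit a flow only if it is explicitly listed; deny everything else."""
    return (src_vlan, dst_vlan, dst_port) in ALLOWED_FLOWS
```

Real firewalls evaluate ordered rules with wildcards rather than exact tuples, but the governing principle is the same: communication between segments must be opted in, never assumed.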
Encrypt the Data
Implementing data encryption significantly increases the effort required to compromise data. Encrypting data at rest on laptops, workstations in insecure environments, and servers containing sensitive data will mitigate the risk of data compromise.
A mobile device management solution should be implemented for all corporate and user-provided mobile devices that will be permitted to access this data.
Encryption for data in transit should also be implemented for all methods: Transport Layer Security (TLS) should be required for all outbound email communications and for all web-based portals and user interfaces. Command Line access to management interfaces should be through SSH as well.
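On the client side, requiring modern TLS can be made explicit in code. This sketch builds a strict context with Python's standard `ssl` module; the TLS 1.2 floor is an assumption to adjust to your organization's policy:

```python
import ssl

def strict_tls_context():
    """Client context that refuses legacy protocol versions and
    unverified certificates."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0/1.1
    ctx.check_hostname = True             # the default, made explicit here
    ctx.verify_mode = ssl.CERT_REQUIRED   # the default, made explicit here
    return ctx
```

A context like this would be passed to `smtplib.SMTP.starttls()` or an HTTPS client, ensuring applications fail closed rather than silently negotiating a weak connection.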
This mitigation strategy can be further strengthened by taking proper care to use a centralized key management system and to ensure that encryption algorithms and key sizes are reviewed and updated annually.
Protect the Data
Access to the systems containing sensitive data on the server VLAN should be restricted to specific groups of workstations within the network; file systems and database servers should also be restricted to specific groups of users.
User accounts should be configured with specific access rights based on their role within the organization. Administrative users should have two accounts, one with restricted access for normal work activities, and a separate admin-level account for any systems maintenance responsibilities.
Along with these controls, Data Loss Prevention should be implemented as a means of identifying and/or preventing the unauthorized exfiltration of data via USB, email, or web-based communications. DLP solutions typically rely on either common keywords or analysis of predefined data to identify, enforce, and report on policy violations.
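At its simplest, a keyword- or pattern-based DLP check reduces to regular expressions over outbound content. The two patterns below are illustrative only; real DLP products combine far richer pattern sets with data fingerprinting:

```python
import re

# Illustrative patterns: a US Social Security number and a 16-digit
# payment card number. Not exhaustive and prone to false positives.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def dlp_findings(outbound_text):
    """Return the names of any sensitive-data patterns found in the text."""
    return {name for name, pattern in PATTERNS.items()
            if pattern.search(outbound_text)}
```

The false-positive problem mentioned above is visible even here: any hyphenated number in the SSN shape would match, which is why human review backs up pattern-based systems.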
Any system or account on the network carries some risk of being compromised. Any account or system with access to confidential data should be limited in order to reduce the chance of successful unauthorized access. Restricting access to critical resources and limiting the access rights of authorized systems and accounts will enable IT personnel to focus on detecting and preventing a smaller range of potential attacks.
by Perry Lynch
Everything in systems security ultimately is about protecting data. CIS Control #13 deals with data protection in its most direct sense. The main issues are identifying sensitive data, preventing its unauthorized transfer, detecting any such transfers, and making improperly acquired data as difficult to use as possible.
Identifying critical data
The first step is to identify the data that needs protection. Organizations generally have their data spread over multiple systems with varying levels of security. However, you can successfully protect this data through the use of several tools and techniques: Access control, encryption, integrity protection, and data loss prevention can be used together to identify, restrict, and protect any sensitive or mission-critical data.
A data classification process should be undertaken. Once data is properly classified and labeled as regulated, sensitive, confidential, or public, those files and folders should then be migrated to properly identified folders on the SAN, and group policy should be applied to ensure that access is limited to authorized staff members.
Databases and files with sensitive data should be kept on machines which aren't exposed to outside connections. Access to them should be restricted to authorized users on the internal network, in a manner that's consistent with business requirements.
Once sensitive data is adequately secured, routine network hygiene needs to take place: Many users will maintain bad habits and keep unsecured copies of sensitive data because it's convenient. Administrators should routinely use appropriate tools to scan desktops and non-secured folders on the SAN for cleartext that looks like sensitive data and alert the appropriate data owners.
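A cleartext sweep of the kind described can be approximated with a recursive file scan. This sketch looks only for SSN-like strings and skips unreadable files; a real scanner would handle encodings, binary content, size limits, and many more data patterns:

```python
import os
import re

# One illustrative pattern; production scanners use a full pattern library.
SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan_tree(root):
    """Walk `root` and list files containing cleartext sensitive data."""
    flagged = []
    for dirpath, _subdirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as handle:
                    if SENSITIVE.search(handle.read()):
                        flagged.append(path)
            except OSError:
                continue  # unreadable file; a real tool would log this
    return flagged
```

Results from a scan like this go to the data owners, as the article notes -- the scanner finds the copies, but people decide what to do with them.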
Protection by (and from) encryption
Laptops and mobile devices are easily stolen, so if they hold any sensitive information, the entire device needs encryption. Mobile Device Management tools can be used to secure sensitive corporate data for corporate and user-owned phones and smart devices, without impeding the end user’s personal use of the device. Full Disk Encryption should be deployed for all corporate laptops, using a centralized key management system. This will prevent unauthorized users from being able to access the device and any data should the laptop become lost or stolen.
Within the enterprise, encryption is often required in databases and other systems on the network. Many databases contain sensitive fields that require encryption or hashing, independent of whether the disk is encrypted. Other systems may require that the entire database be encrypted.
Methods of encryption and hashing need periodic review. Some algorithms that were once considered strong, such as the SHA-1 hash function, are now deprecated because of their weaknesses. Any data protected with them needs migration to a stronger algorithm.
Encryption is valuable, but it's a problem when it isn't supposed to be happening. If outgoing encrypted traffic is originating from unauthorized desktops, it could be evidence of malware sneaking the data out. Network monitoring software can detect and flag the use of SSH and other secure protocols outside of expected contexts. If they don't have a legitimate purpose, administrators need to track down their source and remove any malware responsible.
Encrypted exfiltration can also tunnel through harmless-looking packets, such as DNS requests. These are harder to detect, but application-level monitoring software can often identify them by characteristics like abnormally long data fields.
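The long-field heuristic mentioned above can be expressed directly. The length thresholds here are guesses for illustration, not established cutoffs, and real detectors also consider entropy and query volume:

```python
def suspicious_queries(query_names, max_label=40, max_total=120):
    """Flag DNS query names with abnormally long labels or overall length --
    a simple heuristic for data smuggled inside lookups."""
    return [q for q in query_names
            if len(q) > max_total
            or any(len(label) > max_label for label in q.split("."))]
```

Tunneling tools typically encode stolen data as long hex or base32 labels under an attacker-controlled domain, which is exactly the shape this check flags.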
Monitoring data movement
Network monitoring can generally recognize dubious packets. These packets could be included in otherwise legitimate traffic, such as an email that carries sensitive information in cleartext. It could indicate malware is at work, but it might also indicate that users are making otherwise legitimate transfers in an insecure way.
This falls into the area of data loss prevention (DLP). Software systems for DLP take a variety of approaches to recognizing abnormal traffic. Most rely on pattern detection, so human verification is generally necessary. Other systems rely on fingerprinting previously identified data and can operate effectively with less human intervention. In either case, the software needs to be configured so that the number of false positives is reasonably low and all alerts get the attention they need.
Known hostile IP addresses should be blocked and monitored, as attempts to reach them could indicate that malware is trying to send out sensitive data; other destinations could be attempted if the first one is unreachable.
Transferring data within the network is sometimes a concern. Copying sensitive information to mobile phones or portable storage devices increases the risk. It may be a good idea to configure machines to prevent those transfers.
A large part of data protection is simply knowing where the information is and where it's going. Keeping track of all sensitive data storage and limiting its movement are essential practices, and accomplishing that requires safe network configurations, monitoring of traffic, encryption of data, and prompt action when problems arise. Protection needs to be multi-layered, especially when leaks would cause serious harm.
by Perry Lynch
Defending network boundaries is an increasingly complicated and difficult task. Cloud services, remote access, and mobile devices can make it difficult to identify the exact boundaries of a network. CIS Control #12, which deals with the defense of network boundaries, is correspondingly complex. It pays to remember that boundary protection isn't just a matter of securing the front lines; it's also a major component in a layered defense strategy.
Managing the task
Securing the boundaries means paying attention to new threats and attack methods and evaluating them against the needs of the business. Achieving a balance between effective security and user needs will require frequent risk analysis and constant communication with upper management. By doing so you will enable enforcement of an effective and realistic security plan that supports the business needs of your network.
A well-structured network architecture includes not just a DMZ for the limited number of Internet-facing systems, but also specific security zones for internal servers, systems management workstations, and other business-critical systems or applications.
Network scanning is necessary to make sure no one attempts an end run around the proxy. Such attempts might come from malware or from impatient users trying to circumvent the rules. Unauthorized VPN connections might send encrypted traffic through the proxy and present a security risk even if their purpose is relatively innocent.
Decryption of network traffic should take place at the proxy level. That lets it apply application-level security on top of IP and port filtering. The proxy will use whitelisting or blacklisting to prevent connections to malicious servers. Whitelisting is safer, but it's difficult to maintain a complete list of approved domains and IP addresses without constantly adding to it. Blacklisting requires constant updating from services that list rogue addresses.
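The whitelist/blacklist trade-off described above can be made concrete with a small decision function. The domain lists are hypothetical; a real proxy would pull its deny list from a regularly updated reputation feed.

```python
# Hypothetical policy lists for illustration.
ALLOWED = {"example.com", "vendor-support.example.net"}
BLOCKED = {"malware-c2.example.org"}

def proxy_decision(domain, mode="blacklist"):
    """Whitelisting permits only approved domains; blacklisting
    permits everything not on the deny list."""
    domain = domain.lower().rstrip(".")
    if mode == "whitelist":
        return "allow" if domain in ALLOWED else "deny"
    return "deny" if domain in BLOCKED else "allow"
```

The asymmetry is visible in the defaults: in whitelist mode an unknown domain is denied (safe but high-maintenance), while in blacklist mode it is allowed (convenient but dependent on feed freshness).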
Both inbound and outbound traffic need filtering. Only ports and protocols that are considered mission-critical should be permitted outbound through the firewall. Additionally, blocking access to known malicious domains will defeat many phishing attempts. If malware can't reach a command and control server, it becomes far less effective and easier to eliminate.
Intrusion prevention and detection
Preventing unauthorized activities and catching them as they happen are crucial to boundary protection. The Intrusion Detection/Prevention Systems (IDS/IPS) should be configured to alert and/or stop a majority of attempts by catching suspicious traffic. Signature-based detection is the traditional approach, but sandboxing and other methods can be considered as supplemental tools to detect zero-day attacks.
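Signature-based detection at its core is pattern matching over packet payloads. Real IDS engines such as Snort or Suricata use far richer rule languages; the toy signatures below are only meant to show the mechanism.

```python
import re

# Toy signatures for illustration; real IDS rules are far richer.
SIGNATURES = {
    "sql-injection": re.compile(rb"union\s+select", re.IGNORECASE),
    "path-traversal": re.compile(rb"\.\./\.\./"),
}

def match_signatures(payload: bytes):
    """Return the names of all signatures that match a payload."""
    return [name for name, pat in SIGNATURES.items() if pat.search(payload)]
```

The weakness this illustrates is also the reason the control suggests supplemental tools: a zero-day attack matches no known signature, so sandboxing or anomaly detection is needed to catch it.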
Monitoring should record the headers of any suspicious packets, if not the whole packet. This information is valuable for event monitoring, so that the source of the problem (external or internal) can be identified. Analytics run on this information can turn up patterns that are too subtle to detect from a small sample.
Malicious traffic can piggyback on all kinds of protocols to escape notice. For instance, if large numbers of senseless DNS requests are being sent out, they may cloak communication with a hostile server. For this reason, DNS queries should only be permitted to trusted external servers, many of which can provide filtering services to further limit the ability to introduce malware to the network.
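One common heuristic for spotting those "senseless" DNS requests is to measure the length and character entropy of the queried name: algorithmically generated domains and DNS-tunnelled data tend to produce long, random-looking labels. The thresholds below are illustrative only, and a real deployment would tune them against baseline traffic.

```python
import math
from collections import Counter

def entropy(label):
    """Shannon entropy (bits per character) of a string."""
    counts = Counter(label)
    total = len(label)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def suspicious_dns(name, max_entropy=3.8, max_label_len=20):
    """Crude heuristic: long, high-entropy leftmost labels are typical
    of DGA domains or DNS tunnelling. Thresholds are illustrative."""
    label = name.split(".")[0].lower()
    return len(label) > max_label_len and entropy(label) > max_entropy
```

A heuristic like this generates leads for analysts rather than verdicts, which is why routing queries through a trusted, filtering resolver remains the primary control.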
Security would be simpler if the entire network were physically behind the router and firewall. However, most businesses find that allowing remote access increases productivity and improves employee satisfaction. IT management generally has less control over these devices.
The CIS control recommends requiring all remote access to use two-factor authentication for logins. If those devices fall into the wrong hands or if someone steals the password, an additional factor such as a token or a text message will make it harder for them to take advantage of it.
If the business lends devices for use outside the office, it should set up remote device management for them. This will ensure they stay up to date on patches and have a secure configuration. In the case of cell phones and other smart devices, it should include remote wiping. BYOD devices should meet company-set security standards before getting access.
Business partners that connect to the network can be a serious risk if they don't observe high security standards. The business needs to specify security standards which connected partners have to meet, then monitor their access.
by Perry Lynch
3:30 min read
Firewalls, routers, and switches play a critical role in network security. How well they succeed depends on the level of attention administrators pay to their configuration. CIS Control #11 addresses the need to configure network devices carefully and avoid mistakes that could let intruders in.
Remember that it's not just the network perimeter that needs protection! Every switch and access point in the network needs to stay secure. It may take some initial effort, but keeping them secure is not too difficult as long as there are procedures in place and they are followed routinely. Software automation can also be used to keep the task manageable.
Most of the measures described in this control can be summarized as always providing accountability for the configuration and maintenance of network devices. It should always be possible for administrators to find out what the device configurations are, what has been changed, by whom, and why. This should be managed as part of a change/configuration management process that is used throughout the enterprise.
Configure all devices securely
Although every network device needs individualized configuration, there is a known pattern to the configuration process, and the default setup in most systems is geared more toward convenience than security. A strong configuration changes the administrative account name, implements two-factor authentication, and disables all unnecessary services. In particular, all command-line access should be via SSH v2, with Telnet disabled. Administrative access to the devices should only be permitted from within the network environment; access from the Internet should be disabled before the device goes into production.
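Several of these hardening rules can be checked mechanically by scanning a device's configuration text. The sketch below looks for two Cisco-style directives as examples; the pattern list is a hypothetical starting point, not a complete hardening standard.

```python
# Hypothetical checks against a device's running configuration text.
INSECURE_PATTERNS = {
    "telnet enabled": "transport input telnet",
    "default admin name": "username admin",
}

def audit_config(config_text):
    """Return the names of hardening rules the configuration violates."""
    lower = config_text.lower()
    return [issue for issue, pattern in INSECURE_PATTERNS.items()
            if pattern in lower]
```

Running such a check on every device at deployment time, and again on a schedule, catches configurations that were never hardened as well as ones that drifted.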
A configuration management process should be established and used to record secure configurations for each device. Along with keeping track of the standard secure configuration, this enables network administrators to run periodic comparisons of the current state against the recorded standard to ensure consistency of configs and allow audits against the change management process. Automation tools are valuable for checking all network devices regularly and reporting any discrepancies.
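The periodic comparison against the recorded standard can be as simple as a line diff between the golden configuration and the running one. A sketch using Python's `difflib`; the configuration snippets are hypothetical.

```python
import difflib

def config_drift(golden, running):
    """Return unified-diff lines showing where the running config
    departs from the recorded standard."""
    return list(difflib.unified_diff(
        golden.splitlines(), running.splitlines(),
        fromfile="golden", tofile="running", lineterm=""))

golden = "hostname core-sw1\nno ip http server\ntransport input ssh"
running = "hostname core-sw1\nip http server\ntransport input ssh"
drift = config_drift(golden, running)
```

An empty diff means the device still matches the standard; any output is either an unauthorized change or an approved change whose golden copy was never updated, and both belong in the change management process.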
Sometimes it's necessary to make exceptions for specific business purposes, such as allowing a port which isn't normally open. The first step in doing this should be a risk assessment, weighing the loss of security against the need to get something done. When the need is over, administrators should revoke the exception. These temporary changes should be tracked in an open service desk or change management ticket to ensure they are returned to normal and not forgotten.
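Tracking exceptions so they are actually revoked is mostly a matter of recording an expiry date with each ticket and checking it regularly. The ticket records below are hypothetical and stand in for entries in a real change management system.

```python
from datetime import date

# Hypothetical ticket records for approved temporary exceptions.
exceptions = [
    {"ticket": "CHG-1041", "rule": "allow tcp/8443 inbound",
     "expires": date(2019, 5, 1)},
    {"ticket": "CHG-1107", "rule": "allow tcp/2222 from vendor",
     "expires": date(2019, 6, 15)},
]

def overdue(records, today):
    """Exceptions past their expiry date that still need to be revoked."""
    return [r["ticket"] for r in records if r["expires"] < today]
```

A scheduled job that emails the output of `overdue` to the firewall team turns "don't forget" into a process.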
Keep patches up to date
It may seem obvious that all network devices should have the latest security patches, but the practice can be complicated: patching a router or firewall usually requires at least a little downtime, and there's a risk that it won't come back up properly. Updated devices will also need testing afterwards to make sure their functionality hasn't changed.
Every patch which becomes available should be evaluated for its importance and its impact on the network. It may be safe to skip over one which just improves performance, but a patch which includes serious vulnerability fixes needs to be installed as quickly as is consistent with good management and your organization’s policies.
Automated testing will let the IT department know quickly if there are any problems with the patch. If there are, they can work on fixing the problem or fail over to another device.
Limit administrative access
The control recommends isolating administrative access from normal network usage as much as possible. Ideally, just one machine should handle all administrative tasks. This system should function primarily as a console, with limited domain rights and with Internet access restricted to select vendor support sites if at all possible.
The goal is to limit the opportunities to compromise the admin system. If the only way to change the device settings is from one specific system or subnet, unauthorized attempts will be very difficult to accomplish. Using just one machine also simplifies logging and accountability.
The network ought to be segmented so that other machines can't access the administrative computer. A VLAN within the business network will let the administrative machine communicate with the network devices but not have any direct connection with the business portion of the network. Another approach is to have a separate network interface controller for the admin machine.
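Enforcing "management only from the admin subnet" is a one-line membership test once the management VLAN is defined. The subnet below is a hypothetical management VLAN.

```python
import ipaddress

# Hypothetical management VLAN for the dedicated admin workstation.
ADMIN_NET = ipaddress.ip_network("10.200.0.0/24")

def admin_access_allowed(src_ip):
    """Permit device-management sessions only from the admin subnet."""
    return ipaddress.ip_address(src_ip) in ADMIN_NET
```

The same check, expressed as an access list on each device's management interface, is what makes unauthorized configuration attempts from the business network fail before they reach a login prompt.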
by Dwayne Stewart
3:45 min read | Audio
In the event of a security breach of your network, it is likely that the attackers have altered or destroyed important data and security configurations. The tenth CIS control, data recovery capabilities, addresses the importance of backing up system data and properly protecting those backups. By doing so, you ensure the ability of your organization to recover lost or tampered-with data.
Every minute your network is down is productivity lost. Administrators must ensure up-to-date and functioning restoration data has been properly protected using physical safeguards and data encryption, both at rest and in transit. Failure to establish a reliable and secure data recovery solution could mean the difference between a smooth return to standard operations and scrambling to rebuild systems for days or weeks just to get back to where you were before the data loss. No one wants that.
A step-by-step breakdown of the proper controls to ensure you can recover your data:
Ensure Regular Automated Backups
A fundamental component of an efficient backup process is automation. Humans are prone to error. Beyond mental lapses, we are susceptible to illness and mobility-limiting natural disasters, to list just a small subset of possible contingencies.
Numerous applications are available that can streamline the backup process and achieve data redundancy. Maintaining a redundant set of up-to-date backups at an off-site facility is essential and can help ensure data recovery in most situations. A useful rule of thumb is 3-2-1: keep at least three copies of your data, on two different types of media, with one copy stored off-site.
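The 3-2-1 rule (at least three copies, on two media types, with one off-site) is easy to verify automatically against a backup inventory. The inventory format below is hypothetical.

```python
# Hypothetical backup inventory: each copy records its media type
# and whether it is stored off-site.
copies = [
    {"media": "disk",  "offsite": False},  # local NAS
    {"media": "tape",  "offsite": False},  # tape library
    {"media": "cloud", "offsite": True},   # cloud archive
]

def satisfies_3_2_1(copies):
    """Check three copies, two distinct media types, one off-site."""
    return (len(copies) >= 3
            and len({c["media"] for c in copies}) >= 2
            and any(c["offsite"] for c in copies))
```

Running such a check as part of the nightly backup job means a failed or decommissioned destination is flagged the day it happens, not the day you need a restore.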
Perform Complete System Backups
It is important that a comprehensive backup strategy be implemented. This should allow for the speedy recovery of data, whether it be a few specific files or an entire server. One useful technique for scheduling system backups is the Grandparent-Parent-Child system: daily incremental backups (child) are recycled first, weekly full backups (parent) are kept longer, and monthly full backups (grandparent) are retained the longest.
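A Grandparent-Parent-Child rotation (daily incrementals, weekly fulls, monthly fulls) reduces to a small scheduling function. The tier boundaries below, first of the month for the monthly full and Sunday for the weekly full, are illustrative choices, not part of the scheme itself.

```python
from datetime import date

def backup_tier(day):
    """Assign a backup date to a Grandparent-Parent-Child rotation tier.
    Monthly fulls are retained longest, weekly fulls next, and daily
    incrementals are recycled first. Boundaries here are illustrative."""
    if day.day == 1:
        return "grandparent"  # first of the month: monthly full
    if day.weekday() == 6:
        return "parent"       # Sunday: weekly full
    return "child"            # any other day: daily incremental
```

Retention policy then attaches to the tier rather than to individual backups, which keeps media usage predictable.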
Test Data on Backup Media
All the automation in the world won't save you if your backups are corrupted. The integrity of both your backup system and the system images themselves must be tested regularly.
CIS Control #10 states, "Once per quarter (or whenever new backup equipment is purchased), a testing team should evaluate a random sample of system backups by attempting to restore them on a test bed environment."
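Beyond full test restores, a cheap everyday integrity check is to record a cryptographic digest of each backup at creation time and verify it before relying on the copy. A minimal sketch; the manifest is assumed to be a simple path-to-digest mapping.

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Stream a file through SHA-256 without loading it into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verify_backup(path, manifest):
    """Compare a backup file's digest against the manifest recorded
    at backup time; a mismatch means silent corruption."""
    return sha256_of(path) == manifest.get(path)
```

Digest checks catch bit rot and truncated transfers, but they do not prove the backup is restorable, so they supplement rather than replace the quarterly test restores the control calls for.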
Variations of the Grandparent system explained above can also be easily adapted to work here.
Backups could be directed to various locations, such as network-attached storage, removable media, or a cloud-based datastore. The size and budget of your department will directly affect what approaches are feasible for you. It is important to ensure that onsite backup data is not directly accessible by other hosts on the network. Direct access to backup data should be limited to the backup utility used to perform backup and restore activities. Ideally, archived data should be stored offsite and offline with physical safeguards.
The biggest mistake you can make is assuming your organization will not be targeted. Do not assume that because you are not handling government secrets it is acceptable to leave the removable media holding your backups sitting on your desk. Physical security measures for media containing backup data must be enforced as rigorously as those pertaining to the network. It is also important to ensure that backup data destined for off-site storage is encrypted when saved to removable media.
Ensure Backups Have At Least One Non-Continuously Addressable Destination
More explicitly, CIS control #10 specifically urges that "...all backups have at least one backup destination that is not continuously addressable through operating system calls."
Attackers typically operate by gaining a foothold in the system, then enumerating the systems present in your network, slowly mapping its architecture and attempting to escalate privileges across multiple points.
Because of this, it is unsafe to assume that any backup data accessible through your network is ultimately safe. As mentioned in the 3-2-1 method, and explicitly urged in CIS Control #10, at least one back-up should be located offline and preferably offsite.
The most important ideas to remember when designing your backup systems are regular automated backups, complete system backups, routine testing of backup media, and at least one backup destination that is not continuously addressable.
Addressing each of the above items will help to ensure the safety and recoverability of your network systems and company data.