by Perry Lynch
3:30 min read | Audio
Account hijacking lets criminals impersonate employees and contractors. They can trick others into divulging information and gain access to systems. It's especially dangerous when they gain control of inactive accounts, since they might escape detection for a long time. CIS Control #16 presents ways of preventing account theft and detecting it when it does happen.
How attackers gain control of accounts
Phishing, brute-force password guessing, and gaining physical access to unattended workstations are some of the ways a would-be invader can steal user credentials. Some users make it easy for the attackers by using common passwords or writing them down where visitors can see them. If a user has a mobile device that logs in automatically, someone who steals it can get into the accounts without further effort.
If the attacker can successfully impersonate the victim by sending and receiving emails from a spoofed account, they may be able to gain access to other accounts by requesting a link to reset their passwords. This is most effective when no one else is currently using the account. Otherwise the account owner may notice the emailed link and suspect something is wrong.
A successful impersonator can email other users and convince them to send confidential information or arrange wire transfers. It could be a while before anyone recognizes the impersonation.
Managing account lifecycles
Deactivating stale accounts reduces the opportunities for impersonation. It also protects against actions taken by disgruntled ex-employees or contractors who might take illegal advantage of their continuing access. A process should be implemented to disable accounts when employees are terminated or contractors complete their current tasks.
Activity monitoring can catch any accounts that have slipped through the cracks and gone dormant without being closed. A well-structured monitoring system can also detect spurious logins at times when the user wouldn't normally be working, as well as attempts to log into deactivated accounts.
Preventing account theft
Every hijacking method warrants its own type of defense. Password theft can be thwarted with a requirement for strong passwords (CIS recommends 14 characters or more). Two-factor authentication will make it harder to use stolen passwords. All authentication should, of course, use encrypted protocols.
Although CIS no longer recommends frequent password changes as a method of protection, it's still a good idea to change them on a regular basis. Consider that the most effective way to meet a password length requirement is to exceed it: use passphrases, complete with punctuation. These are easy to remember, which reduces the odds that users will write them down or that attackers will guess them.
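A policy check along these lines can be sketched in a few lines. The 14-character minimum comes from the CIS recommendation cited above; the small deny-list of common passwords is a hypothetical example, not a real blocklist.

```python
# Illustrative deny-list; production systems use much larger breach corpora.
COMMON_PASSWORDS = {"password", "letmein", "qwerty123456"}

def meets_policy(candidate, min_length=14):
    """Length check per the CIS recommendation, plus a common-password check."""
    if len(candidate) < min_length:
        return False
    if candidate.lower() in COMMON_PASSWORDS:
        return False
    return True

print(meets_policy("hunter2"))                         # too short
print(meets_policy("Correct horse, battery staple!"))  # passphrase passes
```

Note how a punctuated passphrase sails past the length requirement while staying memorable.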
Password files need to be encrypted or hashed and be accessible only to administrators. Although current operating systems use password hashing and protected databases, there are other avenues: Many departments keep a file of account credentials in a shared folder or network drive. These should be migrated to trusted credential management platforms, using current encryption and authentication methods to ensure that only authorized users can access them.
Having accounts automatically log out after a period of inactivity reduces the chance for anyone to walk up to an unattended computer and use it. Alternatively, the system can require re-entry of the password after a short time and then let the user continue the same session.
Detecting hijacked accounts
This requires logging of account activity and analyzing it. Inspecting the log for an unusual number of failed logins, or off-hours activity, is an option available to all system managers.
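The two inspections named above can be sketched as simple passes over an authentication log. The log entries, the three-failure alert threshold, and the 8-to-6 working window are illustrative assumptions.

```python
from collections import Counter
from datetime import datetime

# Hypothetical auth-log entries: (timestamp, account, success).
events = [
    (datetime(2018, 7, 2, 3, 14), "jsmith", False),
    (datetime(2018, 7, 2, 3, 15), "jsmith", False),
    (datetime(2018, 7, 2, 3, 16), "jsmith", False),
    (datetime(2018, 7, 2, 9, 5), "mjones", True),
]

def flag_failed_logins(log, threshold=3):
    """Accounts whose failed-login count meets the alert threshold."""
    fails = Counter(acct for _, acct, ok in log if not ok)
    return {acct for acct, n in fails.items() if n >= threshold}

def flag_off_hours(log, start=8, end=18):
    """Any activity, successful or not, outside normal working hours."""
    return [(ts, acct) for ts, acct, _ in log if not start <= ts.hour < end]

print(flag_failed_logins(events))  # repeated 3 a.m. failures on one account
```

A SIEM does the same thing at scale, but even this level of scripting is within reach of any system manager with access to the logs.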
Staying on top of all the accounts that an organization issues keeps opportunists from taking control of them. Tools for centralized account management help in implementing this. Keeping the list of active accounts winnowed down to the ones currently in use means fewer accounts to attack and fewer that can be taken over without being noticed. With ongoing monitoring of account usage, would-be intruders will find far fewer opportunities to pillage.
by Perry Lynch
3:45 min read | Audio
Wireless access presents a special challenge for network security. A weak security implementation will allow intruders to gain an almost physical level of access; they may be able to bypass your firewall and directly connect to your information systems from locations that are within range of your facilities. CIS Control #15, "Wireless Access Control," provides guidance to minimize this risk.
The risk factors
Unmanaged wireless devices in the hands of trusted users present a significant risk: They give trusted users access to information and are sometimes considered part of the network. However, they are not consistently managed or maintained, and they are routinely exposed to malware and opportunities for corruption when they are away from your protected enterprise network.
To counter these risks, the access point should be considered as much a policy enforcement tool as it is a network gateway. Your network of access points should be maintained at current patch levels and at the highest possible encryption levels and configured to provide secured access to the enterprise network for corporately-owned devices. Guest devices, either staff or visitor-owned, should be restricted to a network segment or VLAN that provides access to the Internet only. To further limit risk, access points should also be configured to prevent ad-hoc wireless networking and direct client-to-client access within the Wireless LAN.
Configuring access points
The 802.11 security standards continue to evolve, with the launch of WPA3 in the second quarter of 2018. The older security protocols, WEP and WPA, have known serious weaknesses and should no longer be used. The TKIP encryption protocol has been deprecated as well. The CIS recommendation is to use WPA2 with AES encryption; AES is the default when using WPA2 on modern devices.
Access point firmware needs to stay up to date. The KRACK vulnerability, discovered in 2017, affected virtually all WPA2 implementations. Manufacturers have issued firmware updates to address this issue; implementing these patches is necessary to maintain security.
If you are planning a future Wi-Fi implementation or upgrade, remember that vendors are submitting device designs for certification to the new protocol, with plans to fully support WPA3 in 2019. Make sure your hardware vendor will support a future-proof implementation to get the most from your investment.
Rogue access points
Unauthorized wireless access points can present a serious risk and should be removed from the network whenever they are discovered. Regardless of intent or configuration, they provide unauthorized and/or unprotected access to the network. Left unsecured, they could provide an unencrypted open access channel into your information assets.
Monitoring software that works from an inventory of authorized systems can recognize any unauthorized devices. This makes it possible to block the offending device from the network, then locate and disconnect it.
Many of the available managed access point solutions include Wireless Intrusion Detection Systems (WIDS) capabilities, providing the ability to detect and disable unauthorized access points or the use of various wireless attack tools.
Limiting other devices
Printers and other devices often include their own wireless access as a convenience feature. In a corporate environment, this should be disabled to prevent the printer from becoming an undocumented entry point to the network.
The use of Bluetooth in the environment is an often-overlooked concern: enabling unregulated pairings may permit intruders to gain direct access to computers on the network. Restricting Bluetooth-based services to only support headsets and input devices is easily handled with group policy and should be implemented whenever the environment contains Bluetooth-capable systems.
Limiting less trusted access
BYOD policies are useful, but allowing personal devices unrestricted access to the same network your information systems rely on is never a good idea. Even with the most restrictive policies, the IT department doesn't have full administrative control over devices not owned by the organization. A reasonable compromise is to provide access to a guest VLAN, implement restrictive ACLs between it and the enterprise network, and permit outbound-only Internet access on that VLAN.
In any event, only wireless devices that are owned by the organization should be permitted on the enterprise network. This provides the IT staff with the authority to enforce adequate security restrictions for those devices.
Wireless networks provide value and convenience, but they require care and attention to avoid becoming a security problem. Facilities containing highly sensitive information assets should consider using wireless for guest access only, or avoid it altogether. Enterprise networks that do use it need to employ the latest protocols, restrict its use to authorized devices, and be on the lookout for unauthorized access points.
by Perry Lynch
2:45 min read | Audio
The fewer ways there are to reach information, the less risk there is of unauthorized access. This is the point of CIS Control #14, "Controlled Access Based on the Need to Know." This is closely related to Control #13 "Data Protection," but focuses on the access allowed. The specific controls have some overlap, especially regarding encryption and logging. What is distinctive to this control is the emphasis on access control and network architecture.
Identify the Data
Data should be identified and automatically labeled or tagged based on the existing data classification requirements for your enterprise. This can be done using one of several active discovery tools that can investigate the network file shares and desktops to flag documents and folders that match the classification criteria. Upon identification, sensitive files can be relocated into the appropriate data file shares, ensuring that access rights and group policy are easier to maintain and govern.
Isolate the Data
Implementing VLANs for critical servers is a straightforward way to reduce the risk of compromise. Along with servers, VLANs should be configured to support other critical business functions. Microsegmentation should also be enabled, which restricts a user's ability to connect directly between workstations on the network.
Implementing firewalls or ACLs between each VLAN will ensure that only authorized systems and protocols are permitted to communicate with each other and will significantly reduce the risk of unauthorized data exposure and/or the unchecked spread of malware within the enterprise.
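The default-deny logic behind an inter-VLAN ACL can be illustrated in software. The VLAN names, ports, and rule set below are assumptions made up for the example, not a vendor configuration.

```python
# Explicit allow rules; anything not listed is denied by default.
ALLOWED = {
    ("workstations", "servers", 443),   # HTTPS to the server VLAN
    ("workstations", "servers", 1433),  # SQL from approved clients
    ("management", "servers", 22),      # SSH from the admin VLAN only
}

def permitted(src_vlan, dst_vlan, port):
    """Default-deny: traffic passes only if an explicit rule allows it."""
    return (src_vlan, dst_vlan, port) in ALLOWED

print(permitted("workstations", "servers", 443))  # allowed by rule
print(permitted("guests", "servers", 443))        # guests stay isolated
```

The design point is that the rule table enumerates what is permitted; malware spreading laterally must match an existing business rule rather than merely find an open path.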
Encrypt the Data
Implementing data encryption significantly increases the effort required to compromise data. Encrypting data at rest on laptops, on workstations in insecure environments, and on servers containing sensitive data will mitigate the risk of data compromise.
A mobile device management solution should be implemented for all corporate and user-provided mobile devices that will be permitted to access this data.
Encryption for data in transit should also be implemented across the board: Transport Layer Security (TLS) should be required for all outbound email communications and for all web-based portals and user interfaces. Command-line access to management interfaces should be through SSH as well.
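As a sketch of enforcing TLS for outbound mail, the snippet below refuses to proceed unless the SMTP session upgrades to an encrypted channel with certificate verification. The hostname and port are placeholders (mail.example.com is hypothetical), and real code would go on to authenticate and send a message.

```python
import smtplib
import ssl

def send_via_tls(host="mail.example.com", port=587):
    context = ssl.create_default_context()  # verifies the server certificate
    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls(context=context)      # raises if TLS cannot be negotiated
        # smtp.login(...) and smtp.send_message(...) would follow here
```

Because `starttls` raises an exception when the upgrade fails, no credentials or message content can ever cross the wire in cleartext.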
This mitigation strategy can be further strengthened by taking proper care to use a centralized key management system and to ensure that encryption algorithms and key sizes are reviewed and updated annually.
Protect the Data
Access to the systems containing sensitive data on the server VLAN should be restricted to specific groups of workstations within the network; file systems and database servers should also be restricted to specific groups of users.
User accounts should be configured with specific access rights based on their role within the organization. Administrative users should have two accounts, one with restricted access for normal work activities, and a separate admin-level account for any systems maintenance responsibilities.
Along with these controls, Data Loss Prevention should be implemented as a means of identifying and/or preventing the unauthorized exfiltration of data via USB, email, or web-based communications. DLP solutions typically rely on either common keywords or analysis of predefined data to identify, enforce, and report on policy violations.
Any system or account on the network carries some risk of being compromised. Any account or system with access to confidential data should be limited in order to reduce the chance of successful unauthorized access. Restricting access to critical resources and limiting the access rights of authorized systems and accounts will enable IT personnel to focus on detecting and preventing a smaller range of potential attacks.
by Perry Lynch
4:00 min read | Audio
Everything in systems security is ultimately about protecting data. CIS Control #13 deals with data protection in its most direct sense. The main issues are identifying sensitive data, preventing its unauthorized transfer, detecting any such transfers, and making improperly acquired data as difficult to use as possible.
Identifying critical data
The first step is to identify the data that needs protection. Organizations generally have their data spread over multiple systems with varying levels of security. However, you can successfully protect this data through the use of several tools and techniques: Access control, encryption, integrity protection, and data loss prevention can be used together to identify, restrict, and protect any sensitive or mission-critical data.
A data classification process should be undertaken. Once data is properly classified and labeled as regulated, sensitive, confidential, or public, those files and folders should then be migrated to properly identified folders on the SAN, and group policy should be applied to ensure that access is limited to authorized staff members.
Databases and files with sensitive data should be kept on machines which aren't exposed to outside connections. Access to them should also be restricted to authorized users on the internal network as well, in a manner that’s consistent with business requirements.
Once sensitive data is adequately secured, routine network hygiene needs to take place: Many users will maintain bad habits and keep unsecured copies of sensitive data because it's convenient. Administrators should routinely use appropriate tools to scan desktops and non-secured folders on the SAN for cleartext that looks like sensitive data and alert the appropriate data owners.
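A cleartext scan of the kind described above can be sketched with a handful of regular expressions. The two patterns below (U.S. SSN and 16-digit card-number shapes) are illustrative; a production tool would use validated detectors and far more rules.

```python
import re

# Hypothetical sensitive-data patterns for demonstration only.
PATTERNS = {
    "card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_text(text):
    """Return the labels of any sensitive-looking patterns found in text."""
    return sorted(label for label, rx in PATTERNS.items() if rx.search(text))

print(scan_text("Employee SSN: 123-45-6789"))    # ['ssn']
print(scan_text("Quarterly numbers look fine"))  # []
```

In practice the scanner would walk desktops and unsecured SAN folders, and each hit would generate an alert to the appropriate data owner rather than a printout.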
Protection by (and from) encryption
Laptops and mobile devices are easily stolen, so if they hold any sensitive information, the entire device needs encryption. Mobile Device Management tools can be used to secure sensitive corporate data for corporate and user-owned phones and smart devices, without impeding the end user’s personal use of the device. Full Disk Encryption should be deployed for all corporate laptops, using a centralized key management system. This will prevent unauthorized users from being able to access the device and any data should the laptop become lost or stolen.
Within the enterprise, encryption is often required in databases and other systems on the network. Many databases contain sensitive fields that require encryption or hashing, independently of whether the disk is encrypted. Other systems may require that the entire database be encrypted.
Methods of encryption need periodic review. Some algorithms once considered strong, such as the SHA-1 hash function, are now deprecated because of known weaknesses. Any data protected with them needs migration to a stronger algorithm.
Encryption is valuable, but it's a problem when it isn't supposed to be happening. If outgoing encrypted traffic is originating from unauthorized desktops, it could be evidence of malware sneaking the data out. Network monitoring software can detect and flag the use of SSH and other secure protocols outside of expected contexts. If they don't have a legitimate purpose, administrators need to track down their source and remove any malware responsible.
Encrypted exfiltration can also tunnel through harmless-looking packets, such as DNS requests. These are harder to detect, but application-level monitoring software can often identify them by characteristics like abnormally long data fields.
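One such characteristic is easy to check: legitimate DNS names are short, while tunneled payloads inflate the query name. The snippet below flags unusually long queries; the 52-character threshold and the sample queries are illustrative assumptions, and real detectors also weigh entropy and query volume.

```python
# Hypothetical length-based heuristic for DNS-tunneling detection.
def suspicious_queries(queries, max_name_len=52):
    """Flag query names long enough to suggest an encoded payload."""
    return [q for q in queries if len(q) > max_name_len]

queries = [
    "www.example.com",
    "a3f9c1d8e7b2a3f9c1d8e7b2a3f9c1d8e7b2a3f9c1d8.badhost.example.net",
]

print(suspicious_queries(queries))  # only the hex-stuffed name is flagged
```

A single long name proves nothing on its own; it is the sustained stream of such queries to one domain that marks a tunnel.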
Monitoring data movement
Network monitoring can generally recognize dubious packets. These packets could be included in otherwise legitimate traffic, such as an email that carries sensitive information in cleartext. It could indicate malware is at work, but it might also indicate that users are making otherwise legitimate transfers in an insecure way.
This falls into the area of data loss prevention (DLP). Software systems for DLP take a variety of approaches to recognizing abnormal traffic. Most rely on pattern detection, so human verification is generally necessary. Other systems rely on fingerprinting previously identified data and can operate effectively with a lower level of human intervention. In either case, the software needs to be configured so that the number of false positives is reasonably low and all alerts get the attention they need.
Known hostile IP addresses should be blocked and monitored, as attempts to reach them could indicate that malware is trying to send out sensitive data; other destinations could be attempted if the first one is unreachable.
Transferring data within the network is sometimes a concern. Copying sensitive information to mobile phones or portable storage devices increases the risk. It may be a good idea to configure machines to prevent those transfers.
A large part of data protection is simply knowing where the information is and where it's going. Keeping track of all sensitive data storage and limiting its movement are essential practices, and accomplishing that requires safe network configurations, monitoring of traffic, encryption of data, and prompt action when problems arise. Protection needs to be multi-layered, especially when leaks would cause serious harm.
by Perry Lynch
3:30 min read | Audio
Defending network boundaries is an increasingly complicated and difficult task. Cloud services, remote access, and mobile devices can make it difficult to identify the exact boundaries of a network. CIS Control #12, which deals with the defense of network boundaries, is correspondingly complex. It pays to remember that boundary protection isn't just a matter of securing the front lines, it’s also a major component in a layered defense strategy.
Managing the task
Securing the boundaries means paying attention to new threats and attack methods and evaluating them against the needs of the business. Achieving a balance between effective security and user needs will require frequent risk analysis and constant communication with upper management. By doing so you will enable enforcement of an effective and realistic security plan that supports the business needs of your network.
A well-structured network architecture includes not just a DMZ for the limited number of Internet-facing systems, but also specific security zones for internal servers, systems management workstations, and other business-critical systems or applications.
Network scanning is necessary to make sure no one attempts an end run around the proxy. Such attempts might come from malware or from impatient users trying to circumvent the rules. Unauthorized VPN connections might send encrypted traffic through the proxy and present a security risk even if their purpose is relatively innocent.
Decryption of network traffic should take place at the proxy level. That lets it apply application-level security on top of IP and port filtering. The proxy will use whitelisting or blacklisting to prevent connections to malicious servers. Whitelisting is safer, but it's difficult to maintain a complete list of approved domains and IP addresses without constantly adding to it. Blacklisting requires constant updating from services that list rogue addresses.
Both inbound and outbound traffic needs filtering. Only ports and protocols that are considered mission-critical should be permitted outbound through the firewall. Additionally, blocking access to known malicious domains will defeat many phishing attempts. If malware can't reach a command and control server, it becomes far less effective, and easier to eliminate.
Intrusion prevention and detection
Preventing unauthorized activities and catching them as they happen are crucial to boundary protection. The Intrusion Detection/Prevention Systems (IDS/IPS) should be configured to alert and/or stop a majority of attempts by catching suspicious traffic. Signature-based detection is the traditional approach, but sandboxing and other methods can be considered as supplemental tools to detect zero-day attacks.
Monitoring should record the headers of any suspicious packets, if not the whole packet. This information is valuable for event monitoring, so that the source of the problem (external or internal) can be identified. Analytics run on this information can turn up patterns that are too subtle to detect from a small sample.
Malicious traffic can piggyback on all kinds of protocols to escape notice. For instance, if large numbers of senseless DNS requests are being sent out, they may cloak communication with a hostile server. For this reason, DNS queries should only be permitted to trusted external servers, many of which can provide filtering services to further limit the ability to introduce malware to the network.
Security would be simpler if the entire network were physically behind the router and firewall. However, most businesses find that allowing remote access increases productivity and improves employee satisfaction. The amount of control IT management can exercise over these devices is generally less.
The CIS control recommends requiring all remote access to use two-factor authentication for logins. If those devices fall into the wrong hands or if someone steals the password, an additional factor such as a token or a text message will make it harder for them to take advantage of it.
If the business lends devices for use outside the office, it should set up remote device management for them. This will ensure they stay up to date on patches and have a secure configuration. In the case of cell phones and other smart devices, it should include remote wiping. BYOD devices should meet company-set security standards before getting access.
Business partners that connect to the network can be a serious risk if they don't observe high security standards. The business needs to specify security standards which connected partners have to meet, then monitor their access.
by Perry Lynch
3:30 min read
Firewalls, routers, and switches play a critical role in network security. How well they succeed depends on the level of attention administrators pay to their configuration. CIS Control #11 addresses the need to configure network devices carefully and avoid mistakes that could let intruders in.
Remember that it's not just the network perimeter that needs protection! Every switch and access point in the network needs to stay secure. It may take some initial effort, but keeping them secure is not difficult as long as procedures are in place and followed routinely. Software automation can also be used to keep the task manageable.
Most of the measures described in this control can be summarized as always providing accountability for the configuration and maintenance of network devices. It should always be possible for administrators to find out what the device configurations are, what has been changed, by whom, and why. This should be managed as part of a change/configuration management process that is used throughout the enterprise.
Configure all devices securely
Although every network device needs individualized configuration, there is a known pattern to the configuration process, and the default setup in most systems is geared more towards convenience than security. A strong configuration changes the administrative account name, implements two-factor authentication, and disables all unnecessary services. In particular, all command-line access should be via SSH V2, with Telnet disabled. Administrative access to the devices should only be permitted from within the network environment; access from the Internet should be disabled prior to implementation.
A configuration management process should be established and used to record secure configurations for each device. Along with keeping track of the standard secure configuration, this enables network administrators to run periodic comparisons of the current state against the recorded standard to ensure consistency of configs and allow audits against the change management process. Automation tools are valuable for checking all network devices regularly and reporting any discrepancies.
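The comparison step can be automated with a plain text diff against the recorded standard. The config lines below are made-up examples, and a real tool would pull the running config from each device over SSH before comparing.

```python
import difflib

# Hypothetical recorded standard vs. the config retrieved from the device.
standard = """hostname core-sw1
no ip http server
transport input ssh
"""

running = """hostname core-sw1
ip http server
transport input ssh
"""

def config_drift(expected, actual):
    """Return unified-diff lines showing deviations from the standard."""
    return list(difflib.unified_diff(
        expected.splitlines(), actual.splitlines(),
        fromfile="standard", tofile="running", lineterm=""))

for line in config_drift(standard, running):
    print(line)  # the re-enabled HTTP server shows up as +/- lines
```

An empty diff confirms the device still matches its recorded baseline; any output becomes an audit finding to reconcile through change management.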
Sometimes it's necessary to make exceptions for specific business purposes, such as allowing a port which isn't normally open. The first step in doing this should be a risk assessment, weighing the loss of security against the need to get something done. When the need for it is over, administrators should revoke it. These temporary changes should be tracked in an open service desk or change management ticket to ensure they are returned to normal and not forgotten.
Keep patches up to date
It may seem obvious that all network devices should have the latest security patches, but the practice can be complicated: patching a router or firewall usually requires at least a little downtime, and there's a risk that it won't come back up properly. Updated devices will also need testing afterwards to make sure their functionality hasn't changed.
Every patch which becomes available should be evaluated for its importance and its impact on the network. It may be safe to skip over one which just improves performance, but a patch which includes serious vulnerability fixes needs to be installed as quickly as is consistent with good management and your organization’s policies.
Automated testing will let the IT department know quickly if there are any problems with the patch. If there are, they can work on fixing the problem or fail over to another device.
Limit administrative access
The control recommends isolating administrative access from normal network usage as much as possible. Ideally, just one machine should handle all administrative tasks. This system should function primarily as a console, with limited domain rights and with Internet access restricted to select vendor support sites if at all possible.
The goal is to limit the opportunities to compromise the admin system. If the only way to change the device settings is from one specific system or subnet, unauthorized attempts will be very difficult to accomplish. Using just one machine also simplifies logging and accountability.
The network ought to be segmented so that other machines can't access the administrative computer. A VLAN within the business network will let the administrative machine communicate with the network devices but not have any direct connection with the business portion of the network. Another approach is to have a separate network interface controller for the admin machine.
by Dwayne Stewart
3:45 min read | Audio
In the event of a security breach of your network, attackers may well have altered or destroyed important data and security configurations. The tenth CIS control, data recovery capabilities, addresses the importance of backing up system data and properly protecting those backups. By doing so, you ensure your organization's ability to recover lost or tampered-with data.
Every minute your network is down is lost productivity. Administrators must ensure that up-to-date, functioning restoration data is properly protected using physical safeguards and data encryption, both at rest and in transit. Failure to establish a reliable and secure data recovery solution could mean the difference between a smooth return to standard operations and days or weeks spent scrambling to rebuild systems just to get back to where you were before the data loss. No one wants that.
Here is a step-by-step breakdown of the proper controls to ensure you can recover your data:
Ensure Regular Automated Backups
A fundamental component of an efficient backup process is automation. Humans are prone to err. Beyond mental lapses, we are susceptible to illness and mobility-limiting natural disasters, to list a small subset of possible contingencies.
Numerous applications are available that can streamline the backup process and achieve data redundancy. Maintaining a redundant set of up-to-date backups at an off-site facility is essential and can help ensure data recovery in most situations. A useful rule of thumb is 3-2-1: keep at least three copies of your data, on two different types of media, with at least one copy stored offsite.
Perform Complete System Backups
It is important that a comprehensive backup strategy be implemented. This should allow for the speedy recovery of data, whether it be a few specific files or an entire server. One useful technique for scheduling system backups is the Grandparent-Parent-Child rotation: daily "child" backups, a weekly "parent" backup, and a monthly "grandparent" backup, with each older tier retained for a longer period.
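The rotation logic behind a Grandparent-Parent-Child scheme is simple enough to sketch directly. Running the monthly job on the 1st and the weekly job on Sundays is an illustrative convention, not part of the technique itself.

```python
from datetime import date

def backup_tier(day: date) -> str:
    """Assign a calendar day to a GFS rotation tier."""
    if day.day == 1:
        return "grandparent"   # monthly full backup, retained longest
    if day.weekday() == 6:     # Sunday
        return "parent"        # weekly full backup
    return "child"             # daily backup, shortest retention

print(backup_tier(date(2018, 7, 1)))   # 1st of the month
print(backup_tier(date(2018, 7, 8)))   # a Sunday
print(backup_tier(date(2018, 7, 10)))  # an ordinary weekday
```

A scheduler would call this each night to decide which backup job to run and how long to retain the result.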
Test Data on Backup Media
All the automation in the world won't save you if your backups are corrupted. The integrity of both your backup system and the system images themselves must be tested regularly.
CIS Control #10 states, "Once per quarter (or whenever new backup equipment is purchased), a testing team should evaluate a random sample of system backups by attempting to restore them on a test bed environment."
Variations of the Grandparent system explained above can also be easily adapted to work here.
Backups could be directed to various locations, such as network-attached storage, removable media, or a cloud-based datastore. The size and budget of your department will directly affect what approaches are feasible for you. It is important to ensure that onsite backup data is not directly accessible by other hosts on the network. Direct access to backup data should be limited to the backup utility used to perform backup and restore activities. Ideally, archived data should be stored offsite and offline with physical safeguards.
The biggest mistake you can make is assuming your organization will not be targeted. Do not assume that because you are not handling government secrets it is alright to leave the removable media holding your backups sitting on your desk. Physical security measures for media containing backup data must be enforced as rigorously as those pertaining to the network. It is also important to ensure that backup data destined for off-site storage is encrypted when saved to removable media.
Ensure Backups Have At Least One Non-Continuously Addressable Destination
More explicitly, CIS control #10 specifically urges that "...all backups have at least one backup destination that is not continuously addressable through operating system calls."
After gaining a foothold in the system, attackers typically enumerate the systems present in your network, slowly mapping its architecture and attempting to escalate privileges across multiple points.
Because of this, it is unsafe to assume that any backup data accessible through your network is ultimately safe. As mentioned in the 3-2-1 method, and explicitly urged in CIS Control #10, at least one back-up should be located offline and preferably offsite.
The most important ideas to remember when designing your backup systems are: automate regular backups, back up complete systems, test your backup media and restorations regularly, and keep at least one backup destination that is not continuously addressable from the network.
Addressing each of the above items will help to ensure the safety and recoverability of your network systems and company data.
by Andrea Lee Taylor
1:45 min read | Audio
Every once in a while in the annals of cybersecurity, the news isn't a warning about the newest breach or the release of the latest patch. This time the news is good for Maryland buyers of cybersecurity.
The General Assembly of Maryland, on April 9th, passed the Cybersecurity Investment Incentive Tax Credit Bill (SB 228). It provides for “…authorizing certain buyers of certain technology to claim a credit against the State income tax for certain costs; providing that the credit may not exceed certain amounts under certain circumstances; requiring the Secretary of Commerce to approve each application that qualifies for a credit…For any taxable year, the credit allowed…may not exceed $50,000 for each qualified buyer.” (LegiScan)
The cyber incentive bill is unique in its approach. Simply restated, it provides a credit for buyers of cybersecurity services and products from Maryland companies. “This is a first-in-the-nation legislation and we’re looking forward to some really great successes,” said Senator Guy Guzzone (D), primary sponsor of the bill. Cosponsoring were Senators Adelaide Eckardt (R), George Edwards and Andrew Serafini (R).
Cybersecurity is this century’s absolute fact of life. For any business, the necessity for security comes coupled with the budget parameters available to fund a flexible, strategic cyber plan. Any financial assistance in obtaining services or products is a welcome support and boost to doing business.
Qualified buyers may claim a credit on their state income tax up to 50% of the cost of the technology or service purchased from qualified sellers. As a qualified seller, we are excited to be able to share in this opportunity.
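As a rough illustration of how the two limits interact — 50% of the purchase price, capped at $50,000 per qualified buyer per taxable year — consider this small sketch. It is a simplification of the bill's language for arithmetic purposes only, not tax advice.

```python
def md_cyber_credit(purchase_cost: float) -> float:
    """Approximate SB 228 credit: 50% of the purchase price,
    capped at $50,000 per qualified buyer per taxable year."""
    return min(0.5 * purchase_cost, 50_000.0)

print(md_cyber_credit(40_000))   # 20000.0 -- 50% of cost, under the cap
print(md_cyber_credit(120_000))  # 50000.0 -- the $50,000 cap applies
```

In other words, the 50% rate governs smaller purchases, and the $50,000 ceiling governs anything over $100,000.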
“Our focus has been strictly cybersecurity for over 16 years now and this legislation is a first and is a great help to businesses. Anchor looks forward to putting our experience to use helping small businesses improve their security posture,” said Anchor Technologies’ CEO, Peter Dietrich.
Cybersecurity is a necessity. A plan for what to implement and when keeps businesses on track in protecting their important data. Knowing one does not have to worry about whether the company’s data is as secure as possible allows owners to concentrate their efforts on conducting and growing a business. Thank you to the state legislators for helping to empower small business in Maryland.
by Marian Bodunrin
4:00 min read | Audio
Transmitting and receiving data via network ports is a necessary evil. Because every network process uses a specific port to communicate with another, there is no avoiding the inherent risk. The most perilous services on a network are the ones you don't know are running. Default system installations often activate services that serve little or no useful purpose and go unnoticed. "Shadow IT" operations may start up unauthorized, poorly secured services.
There are 65,535 TCP ports and 65,535 UDP ports. Some of them are more vulnerable than others. For example, TCP port 21 connects FTP servers to the internet but has several vulnerabilities, such as cleartext authentication, which make it easy for an attacker with a packet sniffer to view usernames and passwords. Telnet on TCP port 23 likewise sends data in cleartext, leaving it vulnerable to eavesdroppers intercepting users’ credentials and to man-in-the-middle attacks. The busiest ports are also the easiest for attackers to infiltrate. TCP port 80 for HTTP supports web traffic; attacks on web applications that use port 80 include SQL injection, cross-site request forgery, cross-site scripting, and buffer overruns.
A well-run, secure network does not expose any service without a reason. The problem arises when no one notices the services that are running: no one may be monitoring them or keeping them up to date. CIS Control #9 addresses the Limitation and Control of Network Ports, Protocols and Services, and gives specific recommendations for avoiding the risk of unmanaged services and ports.
System administrators need an established baseline of what ports and services are supposed to be running on each machine. In addition, they need to run regular, automated port scans. Simple, free software is available that will do the job. The scan should note any differences from the baseline and notify the administrators.
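To make the idea concrete, here is a minimal TCP connect scanner in Python — a sketch of what such tools do under the hood, not a replacement for a purpose-built scanner such as Nmap.

```python
import socket

def open_tcp_ports(host: str, ports) -> list:
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)                    # don't hang on filtered ports
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

# Example: check a few well-known service ports on the local machine.
print(open_tcp_ports("127.0.0.1", [22, 80, 443]))
```

A connect scan like this only shows which ports accept connections; real scanners add service and version detection on top, which is what lets the results be checked meaningfully against a baseline.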
The first time a scan is run, IT administrators will likely discover previously unaccounted-for or undesirable services, possibly due to oversight. These services should be tracked and disabled upon discovery. Most importantly, perform port scans on a regular basis to determine which services are listening on the network, which ports are open, and which version of the protocol and service is listening on each open port. All such efforts further reduce the attack surface.
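Comparing scan results against the approved baseline can be as simple as a set difference. A sketch of that comparison — in practice the observed set would come from scanner output, and the report would feed an alert to administrators:

```python
def diff_against_baseline(baseline_ports: set, observed_ports: set) -> dict:
    """Flag deviations between an approved baseline and a fresh scan."""
    return {
        "unexpected": sorted(observed_ports - baseline_ports),  # investigate and disable
        "missing":    sorted(baseline_ports - observed_ports),  # expected service is down
    }

# Example: port 8080 appeared without authorization, and 443 went silent.
report = diff_against_baseline({22, 80, 443}, {22, 80, 8080})
print(report)  # {'unexpected': [8080], 'missing': [443]}
```

Note that both directions matter: a new open port may be an unauthorized service, while a missing one may mean a legitimate service has failed.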
Every software installation carries some risk. It could open up unmanaged ports by default, just because they might be useful in certain cases.
When installing new software, the best practice is to identify any services it adds and configure it to run only those that have value for business operations. Running a port scan before and after installation will verify whether any others were added, and all legitimate services should be securely configured.
For an organization to adequately mitigate risks, a layered perimeter of defenses such as application-aware firewalls, network access controls (NAC), intrusion detection/prevention systems should be deployed to avert unauthorized access. “Defense in depth” is the watchword of a good security setup.
Use of endpoint firewalls, removal of all unnecessary services, segmentation of critical services across systems, and applying patches as soon as they become available will reduce your organization’s risk exposure. For instance, a network scan can identify all servers visible from the internet; if any don't need to be visible, moving them to an internal VLAN reduces their exposure. Even if they run unauthorized services that go undetected, at least those services won't be directly reachable from outside.
Running multiple critical services on the same machine is an invitation to trouble. If the same machine runs DHCP, SMTP and HTTP, an attacker that breaches one could jump to the others. Each of those services should have its own virtual or physical machine, with just the ports needed to run it.
It's easy enough to install multiple virtual machines on one computer. That way, each service has its own operating system, root file system, and network settings. If one of them is compromised, the problem is more likely to stay localized long enough to identify and fix it.
Minimize to Maximize
Just as building management needs to know every door through which people can enter and how that door is secured against undetected entry, IT management needs to know every port and service the servers expose. If they're there for a reason, they should be managed and secured. If there's no reason for them, they could be an unguarded back door into the network, one that should be shut and locked. Though it is impossible to eradicate all risk, exposure can be greatly reduced when appropriate controls are put in place to deter attackers. Implementing CSC #9 will further mature your organization's cybersecurity posture, and deploying a continuous monitoring tool as an ongoing exercise will contribute to reducing risk and maximizing cybersecurity.
by Marian Bodunrin
4:30 min read | Audio
Malware is a type of computer program designed to infect a legitimate user’s computer with the intent to inflict harm. It comes in various forms, such as viruses, Trojans, spyware, and worms. Malware is a huge and growing problem, costing businesses millions of dollars and typically exposing or damaging vital data. New forms constantly appear and can be hard to catch. CIS Control #8 lays out recommendations that should be implemented to reduce an organization’s risk.
The degree of damage caused by malware varies according to the type of malware, the type of device that is infected and the nature of the data that is stored or transmitted by the device. As a result, defense strategy needs to act on multiple levels. Defenses need to prevent malware from being installed, from running if it is installed, and from spreading if it runs. This is defense-in-depth and requires a strong set of automated tools.
Automated malware detection and removal software is an absolute requirement. It needs to cover everything on the network: servers, workstations, mobile devices, and anything else that has a processor and runs code. Regular updates are necessary to keep up with new threats, and machines should be checked to make sure they're getting the updates. Periodic vulnerability scans, along with malware detection and blocking, should help prevent a network from being compromised and succumbing to a botnet.
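At its simplest, signature-based detection hashes each file and compares the digest against a feed of known-bad signatures. A stripped-down Python sketch of that core loop — real products layer heuristics and behavioral analysis on top of it:

```python
import hashlib
from pathlib import Path

def scan_directory(root: str, known_bad_sha256: set) -> list:
    """Return paths whose SHA-256 digest matches a known-malware signature."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in known_bad_sha256:
                hits.append(str(path))
    return hits
```

A real deployment would pull `known_bad_sha256` from a regularly updated threat feed, which is exactly why the update checks described above matter: a stale signature set silently stops catching new malware.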
Shadow IT increases risk. If people are running machines that aren't authorized, they aren't going to be consistently monitored and protected. The first and second CIS controls stress the importance of keeping track of everything on a network, and malware protection is one of the reasons that makes such inventories so important.
It isn't enough to put protective software on each machine without an overall plan. Defenses are very hard to manage if haphazardly installed. Each machine would need its own updates, and hostile code that gets blocked on one system could get through on another. Centrally administered and automated protection gives your network a more consistent defense.
Keeping track of what protective software finds is important. It should be set up to log all incidents, and part of administrators’ responsibilities is to review the logs. If an issue turns up on one machine, it may be present elsewhere as well. If an attack occurs repeatedly, it's time to check the defenses against it and strengthen them as necessary.
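Part of that log review can be automated, for instance by tallying how often each threat signature appears across the fleet and surfacing anything seen repeatedly or on more than one machine. A sketch in Python — the log format and field names here are hypothetical, chosen only to illustrate the idea:

```python
from collections import Counter

def recurring_threats(incidents, threshold=2):
    """incidents: iterable of (hostname, signature) pairs from AV logs.

    Return (signature, count) pairs seen at least `threshold` times,
    most frequent first -- candidates for strengthened defenses."""
    counts = Counter(sig for _host, sig in incidents)
    return [(sig, n) for sig, n in counts.most_common() if n >= threshold]

log = [("ws-01", "Trojan.GenX"), ("ws-07", "Trojan.GenX"), ("srv-02", "Adware.Foo")]
print(recurring_threats(log))  # [('Trojan.GenX', 2)]
```

A signature that clears the threshold is exactly the case described above: an issue found on one machine that is probably present elsewhere.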
Network monitoring needs to check for traffic that could indicate malware. The most popular malware model today is Command & Control (C&C), in which the malware reports to a server, sends information, and receives instructions. The monitoring system should log DNS queries in order to catch requests to C&C domains. Effective firewalls can capture suspicious file transfers and block hostile traffic. This isn't limited to blocking ports and IP addresses; the best software can catch malicious packets at the application level, after SSL decryption.
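Checking logged DNS queries against a feed of known C&C domains is straightforward; the main subtlety is matching subdomains as well as exact names, since beaconing malware often uses generated hostnames under a fixed parent domain. A sketch — the blocklist entry here is made up for illustration:

```python
def is_blocked(query: str, blocklist: set) -> bool:
    """True if `query` is a blocklisted domain or a subdomain of one."""
    labels = query.lower().rstrip(".").split(".")
    # Test every suffix: "beacon.evil.example", "evil.example", "example"
    return any(".".join(labels[i:]) in blocklist for i in range(len(labels)))

cc_domains = {"evil.example"}                          # hypothetical threat-feed entry
print(is_blocked("beacon.evil.example.", cc_domains))  # True  -- subdomain match
print(is_blocked("www.example.com", cc_domains))       # False -- unrelated domain
```

Run against the DNS query log, a check like this turns passive logging into an active alert on hosts that are phoning home.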
If a device is caught running malware, the network protection software should quarantine it immediately. Keeping malware from spreading buys time to fix the problem.
Limiting the attack surface
External devices, such as thumb drives, are convenient, yet they create risks. Many people are too trusting of drives received as promotional giveaways, and even legitimate ones are sometimes inadvertently infected. Auto-running software when a device is inserted is a convenience that ought to be retired; the feature should be disabled on all machines. Thumb drives are the most common case, but the caution applies to all mountable devices brought in from outside.
A solid defense will have anti-malware software scan each newly mounted device. If there are suspicious files on it, the software should automatically dismount the device. Newly downloaded files need the same consideration: each one should be scanned, and any that are flagged should be blocked from running.
The multi-layered approach
It's unrealistic to expect any defense to stop all malware at the perimeter. There are just too many threats, new ones being invented and unleashed all the time, and some will make it past the first line of defense. Stopping threats requires a coordinated effort in the firewall, devices on the network edge, server protection, and monitoring.
The multi-layered approach is to prevent malware from being installed, keep it from running if it does get installed, and contain it if it runs.
Everyone understands that malware protection is necessary, but turning it into a systematic set of practices takes a coordinated effort. Everyone involved needs to be working from the same comprehensive cybersecurity plan.