by Marian Bodunrin
2:30 min read
When properly implemented, Control #6 can bring an organization’s security program to a higher level of maturity. Maintaining, monitoring and analyzing audit logs helps an organization gain visibility into the actual workings of its environment. With proper implementation, the control can also help detect, understand and recover from an attack.
Despite best practices, it is impossible to safeguard a network against every attack. Therefore, when a breach occurs, log data can be crucial for identifying the cause of the breach and for collecting evidence, provided the logs were configured properly before the incident occurred.
Deficiencies in security logging and analysis allow attackers to hide their location, malicious code and activities on victims’ machines. Without protected and complete logging records, an organization is blind to the details of an attack, which can go on indefinitely and cause significant damage.
To ensure readiness, and effective log maintenance, monitoring, and analysis, the Center for Internet Security (CIS) recommends the following controls:
Maintaining security logs and actively using them to monitor security-related activities within the environment is essential, especially during post-breach forensic investigations. Therefore, an organization must develop procedures to actively review and analyze logs in real time so that attacks can be detected and responded to quickly. It's one of several best practices for an environment to achieve a safer, stronger cybersecurity posture.
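The kind of real-time review described above can be partly automated. The sketch below is illustrative only: it assumes syslog-style SSH authentication lines and counts failed logins per source address, flagging sources that cross a threshold. The log format and threshold are assumptions; adapt both to your own log sources.

```python
import re
from collections import Counter

# Pattern for failed SSH logins in a syslog-style auth log.
# The exact format is an assumption; adjust it to your log source.
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def failed_logins_by_source(lines, threshold=5):
    """Count failed logins per source IP; return sources at or over the threshold."""
    counts = Counter()
    for line in lines:
        match = FAILED_LOGIN.search(line)
        if match:
            counts[match.group(2)] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}
```

In practice this logic would feed an alerting pipeline or SIEM rule rather than run as a standalone script, but the principle is the same: turn raw log records into actionable, reviewed signals.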
by Andrea Lee Taylor
We have considered individually the Center for Internet Security’s top 5 controls for effective cyber defense. Together, they are a force. Perhaps you’re already aware of CIS’s statistic: of the 20 controls, implementing just the top 5 reduces known cybersecurity vulnerabilities by 85%. If I got that kind of return from the stock market, I’d be retiring. Next week.
And it’s not that the recommended set of actions is impossible to implement--far from it! A shift in focus may be required, but we find most employees and most board members are amenable. Implementing procedures and processes means people may be inconvenienced, even personally so. But more often than not they are open to adopting and adapting when it is for the overall good, including the good of the organization.
When people are educated as to what is important, why it is important and, more importantly, how they can help—it’s been our experience they are more willing to be a part of what is being asked rather than a speed bump to greater security.
CIS has a resource that is not news; neither are the controls. Updated periodically, the latest CIS Controls (V7) can be downloaded along with the white paper Practical Guidance for Implementing the Critical Security Controls (V6). It is a way and a place to start. The return on investment is in strengthened cyber defenses and protection, streamlined administrative security functioning and, ultimately, a savings in financial resources. That is not to say this is a one-time effort that requires no financial backing. It is ongoing work, and it needs funding. But job security and interesting challenges are important, and being one breach away from exigency is no way to live or conduct business.
Someday the CIS Controls advice will not be revolutionary in its results because it will be boringly customary. Yet the controls have not been implemented to such an extent as to render their advice moot or their results less than stunning.
They’re that worth implementing.
Update: V7 of the Controls adds Control #6 to the basic list of controls. Their approach is always one that keeps an eye on the current threat landscape as well as the latest tools developed in cyber defense. And still, the essential remains the same--making sure the basics are covered makes an exponential difference in an organization's security stability.
by Dwayne Stewart
3:30 min read
A compromise of any account is a problem, but it's especially serious when an outsider gains access to an administrative account. An intruder with full control of a device, website or database can do serious damage. CIS Control #5’s message is to apply strict control to the level of access that end-users have to network resources, ensuring that each user is granted just the access required to perform their job duties.
Doing this can be unpopular among your users and can create feelings of distrust in those who are refused administrative privileges on their machines. End-users would much rather have the convenience of not having to rely on IT support staff to perform certain actions on their workstations, and some applications seem to require an admin account for no really good reason. Convincing executives that they shouldn't have administrative access can be a tough job.
All staff need to understand the necessity for stringent management of account privileges. It’s important to educate them about the inherent risk of using accounts with elevated privileges for general everyday tasks on their workstations. If an administrative account is hijacked, not only is all the data on the machine compromised, but the machine itself can now be used to perform additional attacks on other network devices that it can access. The potential consequences of a compromised account are significantly reduced if that account has standard user privileges.
Limit creation of accounts
Ensure that administrative accounts are only created for those employees that require them to perform administration of the various systems for which they are responsible.
Not all users that perform administrative tasks require administrator accounts or administrative privileges. Many systems have the option to make users members of certain pre-defined roles that allow them to perform certain administrative tasks, but not others; this provides them the required privileges without granting unlimited access. For example, in content management systems such as WordPress, it's straightforward to assign a specific pre-defined or custom role to accounts. Editors can manage content but can't install plugins or create new users. In Active Directory, the necessary rights to network resources can be assigned to domain security groups; domain users can then be assigned to those groups in order to more easily manage user rights throughout the network. Properly managing the level of access users have to both their own workstations and various applications on the network largely eliminates any need to assign administrative rights to those users that are not system managers.
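The role-based approach above boils down to checking actions against an explicit permission list rather than granting blanket administrative rights. Here is a minimal sketch of that idea; the role names and permission strings are illustrative and loosely modeled on the WordPress example, not taken from any specific product.

```python
# Map each role to the set of actions it is explicitly allowed to perform.
# Anything not listed is denied by default (least privilege).
ROLE_PERMISSIONS = {
    "editor":        {"edit_content", "publish_content"},
    "administrator": {"edit_content", "publish_content",
                      "install_plugins", "create_users"},
}

def is_allowed(role: str, action: str) -> bool:
    """Grant an action only if the user's role explicitly includes it."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

The design choice worth noting is the default-deny stance: an unknown role or an unlisted action simply fails the check, which is the same posture Active Directory security groups encourage when rights are assigned to groups rather than to individual users.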
Limit use of accounts
Even those who have administrative accounts shouldn't use them for non-administrative tasks, like checking email or researching an issue on the Internet. A phishing message opened while running as an administrator could have nasty consequences. Have system administrators log in to their workstations using standard user accounts. To run applications or execute commands that require elevated privileges, they can use the “runas” feature in Windows, or “sudo” on a Unix or Linux machine. This allows admins to perform their duties without being logged in as an administrator.
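Internal tooling can reinforce this habit. The following is a hypothetical guard for routine scripts, assuming a POSIX system: it refuses to run when invoked with root privileges, nudging the user back toward a standard account and per-command elevation with sudo.

```python
import os

def is_privileged(euid: int) -> bool:
    """True when the effective UID is root (0) on a POSIX system."""
    return euid == 0

def require_unprivileged() -> None:
    """Refuse to run routine tooling under an administrative account.

    A hypothetical guard for internal scripts: everyday tasks should run as
    a standard user, with sudo reserved for the commands that need it.
    """
    # os.geteuid() exists only on POSIX platforms, hence the hasattr check.
    if hasattr(os, "geteuid") and is_privileged(os.geteuid()):
        raise PermissionError(
            "Run this tool as a standard user; elevate specific commands with sudo."
        )
```

This is the inverse of the usual "must be root" check, and that inversion is the point: most day-to-day tooling should assert that it is *not* running with elevated privileges.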
Protect the accounts
Strong passwords should be required for all accounts, especially those used for system administration. It is imperative that passwords be changed for default accounts on all network devices during the initial configuration. The username should be updated as well, if possible. If that’s not an option, consider creating a new administrative account with a unique username and strong password; follow that up by disabling the default account altogether. If there will be more than one person administering a system or device, an appropriately configured account should be created for each one of them. This establishes accountability for all actions performed on the device.
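A password policy is easier to enforce when it is codified. The sketch below is a minimal strength check; the length floor and character-class rules are illustrative thresholds, not a recommendation from CIS, so set them to match your organization's actual policy.

```python
import string

def is_strong(password: str, min_length: int = 12) -> bool:
    """Minimal policy check: a length floor plus four character classes.

    Thresholds here are illustrative; substitute your organization's policy
    (and consider breached-password lists, which this sketch omits).
    """
    if len(password) < min_length:
        return False
    return (any(c.islower() for c in password)
            and any(c.isupper() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))
```

A check like this belongs in account-provisioning tooling, so weak or default-style passwords are rejected at creation time rather than discovered in an audit.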
It’s good practice to use an authentication server, such as TACACS+ or RADIUS, to manage administrative access to network resources that support it. This is an efficient and more secure method of managing both who has access to a network device, as well as the level of access they have on that device.
Another approach is to implement multi-factor authentication (MFA), which requires a combination of two or more types of authentication factors. An authentication factor can be something a user knows (username, password, PIN), something a user has in their possession (key fob, one-time password) or a biological trait of the user (fingerprint, voice, vein patterns). For example, logging into a firewall management interface could require the administrator’s username and password, as well as a temporary code on a security token or a one-time password (OTP) sent to their phone. Each factor provides an additional layer of security, which makes it much more challenging for an attacker to use valid credentials to gain access to a system.
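The "temporary code" factor mentioned above is typically a time-based one-time password. As a concrete illustration, here is the standard TOTP algorithm (RFC 6238, built on the HMAC-based OTP of RFC 4226) in stdlib Python; real deployments should use a vetted authentication product rather than hand-rolled code, and individual products may differ in hash choice or step size.

```python
import base64
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password (SHA-1, dynamic truncation)."""
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F  # low nibble of last byte picks the slice
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, step: int = 30, digits: int = 6, at=None) -> str:
    """RFC 6238 time-based OTP: HOTP over the current 30-second interval."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    return hotp(key, counter, digits)
```

Because the code is derived from the current time window and a shared secret, a stolen password alone is not enough: the attacker would also need the token or phone holding that secret, which is exactly the layering the control is after.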
Persuade the users
With Control #5, one of the biggest challenges is increasing the security awareness of the system managers. Convincing them to embrace and follow policies and procedures that help prevent the compromise of their administrative accounts can take time, especially if they are not typically security-focused individuals. While having administrative access is convenient, it significantly increases the potential of a network breach if an account with elevated privileges is somehow compromised. The way to frame the issue is in terms of risk reduction rather than prohibition. System managers need to understand that they're helping to make systems safer, which helps prevent network breaches and the resulting reputation damage and significant financial consequences. If administrators understand this, they may be more likely to accept the additional restrictions applied on their administrative accounts.
by Brian Nelson
1:30 min read
As the pace of security breaches continues to accelerate, a common thread in most breaches is the exploitation of a technical vulnerability--in either the operating system or an application running on top of it. Just in the past two years at Anchor Technologies, every breach investigation we have been a part of was associated with a known technical vulnerability. The epic Equifax breach exploited a technical vulnerability that had been public knowledge for months prior to the breach. An annual vulnerability assessment is no longer sufficient to protect your organization.
When it comes to technical vulnerabilities, many organizations are making themselves easy targets by either only scanning their external IPs or scanning their internal networks just once a year. If you focus solely on your external exposure, you are ignoring over 90% of your risk.
Most breaches occur through the exploitation of internal resources, and if you are only looking at those internal assets once a year, it is quite likely those assets will have unpatched critical vulnerabilities. Malicious actors know, and count on, this.
To help make your organization a more difficult target, we recommend the following actions:
Implementing a robust scan-and-patch program may seem daunting in the short run but the payoff is exponential. What is the reduction of your cyber risk worth?
by Dwayne Stewart
3:30 min read
Vulnerabilities on Internet-connected systems are targeted on a daily basis. The fourth CIS control, "Continuous Vulnerability Assessment and Remediation," addresses the need to keep these systems protected by keeping up with and fixing newly discovered security issues.
The need for vigilance
Every day, cyber security researchers find new security flaws in software. These software vulnerabilities are generally announced once a patch has been made available. However, once new vulnerabilities are announced, that information is available to both system managers and criminals alike. System managers need to determine whether or not these vulnerabilities exist on their systems and act on the information as quickly as possible to mitigate detected risks.
Vulnerabilities can arise from system misconfiguration as well as software flaws. For example, a host that is accessible from the Internet could expose functionality that should only be available locally, such as access to management interfaces. An external scan should discover these issues.
A vulnerability management process is necessary to keep up with the number of published vulnerabilities. A comprehensive process will identify vulnerabilities and recommend the necessary patches or configuration changes. This should be followed up with patch deployment and remediation scans to ensure that updates were successfully applied.
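The remediation-scan step described above amounts to comparing findings before and after patching. Here is a minimal sketch of that comparison; the (host, CVE) tuple format is an assumption for illustration, since real scanners emit richer records.

```python
# Compare findings from the scan before and after patch deployment to
# confirm which vulnerabilities were actually remediated. Findings are
# modeled as (host, cve_id) pairs, an illustrative simplification.

def remediation_status(before: set, after: set):
    """Return (fixed, remaining, new) findings between two scans."""
    fixed = before - after      # present before, gone now: remediated
    remaining = before & after  # still present: patch missed or failed
    new = after - before        # newly discovered since the last scan
    return fixed, remaining, new
```

The "remaining" set is the important output: it catches patches that were scheduled but never applied, which is exactly the gap a remediation scan exists to close.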
Even one existing critical vulnerability could allow an attacker to take complete control of a system. Therefore, it is important that the appropriate vulnerability scanning and patch management tools are implemented to identify and remediate the various points of risk throughout the company’s network.
Vulnerability scans should be performed from both an internal and external perspective to get a complete picture of what vulnerabilities exist on a network. External scans provide information about the exposure of an organization’s systems to the Internet. They should also highlight potential misconfigurations of the services on hosts that are exposed to the Internet. Internal scans should detect vulnerabilities on all internal hosts accessible by the scanner, as opposed to just those services exposed to the Internet through the gateway firewall. To ensure early detection of new vulnerabilities, scans should be performed monthly at a minimum.
It is generally recommended to perform authenticated scans to get the most comprehensive and accurate set of results, including additional information about versions of installed software, missing patches, insecure configurations and potential malware on scanned hosts.
Once scanning is completed, remediation efforts should be prioritized based on which vulnerabilities present the greatest risk to the organization. An immediate effort should be put into addressing all discovered critical and high-severity vulnerabilities. Higher priority should also be placed on those specific hosts that contain sensitive data, are considered mission critical, or are directly accessible from the Internet.
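That prioritization rule can be expressed as a simple sort: severity first, then exposure, then asset criticality. The field names below are illustrative, not taken from any particular scanner's output format.

```python
# Rank findings so the riskiest items come first: severity, then whether
# the host is Internet-facing, then whether it is mission critical.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def prioritize(findings):
    """Sort scan findings into remediation order, riskiest first."""
    return sorted(
        findings,
        key=lambda f: (
            SEVERITY_RANK.get(f["severity"], 4),   # unknown severities last
            not f.get("internet_facing", False),   # exposed hosts first
            not f.get("mission_critical", False),  # critical assets next
        ),
    )
```

Encoding the policy this way also makes it reviewable: when the business decides that, say, hosts with sensitive data should outrank Internet exposure, the change is one line in the sort key rather than a judgment call made differently by each analyst.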
There are certain risks associated with patch deployment. Patches should be tested before being applied to verify whether or not they will have an adverse effect on dependent software. Additionally, it’s important that the process of deploying patches doesn’t disrupt operations; chances are you don’t want forced reboots on servers and workstations during business hours. This could lead to data loss and will certainly lead to very upset users. This highlights the fact that communication is also a key component of the patch deployment process. It’s important to keep system managers and end-users abreast of planned patch deployments.
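One common safeguard against mid-day reboots is to gate deployment behind a maintenance window. This sketch assumes a fixed overnight window; the 01:00-05:00 hours are an example policy, not a recommendation, and real scheduling tools also account for time zones and blackout dates.

```python
from datetime import datetime

def in_maintenance_window(now: datetime, start_hour: int = 1, end_hour: int = 5) -> bool:
    """True during the off-hours window (01:00-05:00 here, an example
    policy) when post-patch reboots are least disruptive to users."""
    return start_hour <= now.hour < end_hour
```

Deployment tooling can call this check before triggering reboots, deferring any host that falls outside the window to the next scheduled run.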
Also, it isn't just ‘computers’ in the usual sense that need patching. Any device with a processor and firmware could have security issues that could potentially lead to the compromise of that particular device as well as a potential network breach. Devices such as printers, scanners and routers all need to be routinely updated as a part of the patch management process.
Keeping up means greater safety
Unpatched software vulnerabilities are a major factor in system breaches. The cost in lost data, time spent recovering it, and damage to reputation is, more often than not, huge. A systematic approach to detecting and fixing security holes should stop the large majority of threats. It's an ongoing, sometimes tedious task; but it is more than necessary, it is vital work.