Creating a Security Baseline Using Security Templates

In this post I will go through the process of creating a security baseline in a few easy steps. The tool we use is the Security Templates snap-in, which is accessible through the Microsoft Management Console (MMC). We will not actually go through the individual policies, and I am not going to show you which settings should be enabled, disabled, or configured, since that depends entirely on your environment. What I want to show you is how to create templates and how to apply them.

As the first step, open the Run dialog, type mmc, and then from the File menu click Add/Remove Snap-in… to see the window in the following picture:

Then choose Security Templates from the list and press OK to add it to the console:

If you double-click it and expand the sub-folders, you will see all the security policies that you can enable, disable, or configure. In the example below, we configure Audit logon events for both successful and failed logon attempts:

Once all the security policies are configured, you just need to right-click the template (named Server Baseline in this example) and click Save As to save the baseline policies.

Now let's say you want to apply these baseline policy settings to an OU, domain, site, or maybe even a local computer. You just need to open the Group Policy Editor, either through the Group Policy Management Console in your domain or by typing gpedit.msc on your local computer.

Once it is open, go to Computer Configuration -> Windows Settings, right-click Security Settings, choose Import Policy, then browse to the baseline file and open it.

Now if you check your Group Policy settings, you will see that all the settings in your baseline template have been applied to the Group Policy Object into which you imported the policy.

Since you have saved the template file, you can use it anytime you want as your security baseline file to import its settings to any GPOs in your domain environment or even on your local computer.
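As a side note, the template you saved is a plain INI-style text file (a .inf), so you can open and inspect it outside of MMC. Here is a minimal sketch of how you could read such a file with Python's configparser; the snippet content below is hand-written for illustration rather than a real exported baseline, though the section and value names mirror the template format (AuditLogonEvents is a bitmask where 1 audits successes and 2 audits failures):

```python
# Inspect a saved security template (.inf) outside of MMC.
# NOTE: SAMPLE_TEMPLATE is a hand-made illustration, not a real export.
import configparser

SAMPLE_TEMPLATE = """
[System Access]
MinimumPasswordLength = 8
LockoutBadCount = 5

[Event Audit]
AuditLogonEvents = 3
"""

# Security templates are INI-style, so configparser can read them directly.
parser = configparser.ConfigParser()
parser.read_string(SAMPLE_TEMPLATE)

# AuditLogonEvents is a bitmask: 1 = audit successes, 2 = audit failures.
audit = int(parser["Event Audit"]["AuditLogonEvents"])
print("Audit successful logons:", bool(audit & 1))
print("Audit failed logons:", bool(audit & 2))
```

Reading the file this way can be handy for quickly diffing two baselines or documenting what a template actually changes.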

Want to learn more about this topic? Check out my new book for practical tutorials and step-by-step guides, all in one place. To get more information, click on the book below:


Best Wishes

Forefront Endpoint Protection 2012 Beta is out…

Microsoft has been working on its Forefront products for a while now, and it has been really successful in offering products that help increase the security of network environments and systems, both on the server side and on the client side.

I personally love them all, and I have the most experience working with Forefront Threat Management Gateway 2010; I truly enjoyed all the improvements it offers over ISA Server 2006. When you have worked with a product for quite a long time and wished for certain added functionality, and the new release then gives you all of it, that is when you have a really good feeling. I guess Forefront products give you exactly that kind of feeling.

Now the Forefront Endpoint Protection 2012 Beta is out. Before we go a little further into the features, let me give you an overview of what it is. FEP helps businesses efficiently improve endpoint protection while decreasing costs. It is built on System Center Configuration Manager, allowing businesses to use their existing client management infrastructure to manage endpoint protection. Right now, this beta version is compatible with System Center Configuration Manager 2012.

But let's see what's new in this beta release of FEP:

  • Supporting System Center Configuration Manager 2012
  • Improved real time alerts and reports
  • Role-based management
  • User-centric reports (post beta)
  • Easy migration from FEP 2010/ConfigMgr 2007
  • Support for FEP 2010 client agents

FEP 2012 provides protection against known and unknown threats using advanced techniques such as behavior monitoring, a network inspection system, and heuristics. It also receives real-time cloud-based updates through the SpyNet service, helping it stay up to date.

If you need more information on FEP, you can click here and here.

I hope you will be among the early adopters of FEP 2012.

The Enhanced Mitigation Experience Toolkit (EMET)

In previous posts on my blog we talked a little about security exploits, how they function, and how to prevent attacks that use them. In this post I am excited to introduce a great toolkit offered by Microsoft to defend systems against exploitation.

The tool is called the Enhanced Mitigation Experience Toolkit (EMET), and it uses exploit mitigation techniques that make it very difficult for exploits to defeat the system. The protection applied by EMET does not guarantee that the system will never be exploited, but it makes exploitation as difficult as possible, even for exploits targeting a 0-day vulnerability.

Working with EMET is pretty simple: you just need to download it from here, install it on your machine, and choose the software that you want it to protect (the applications you believe are most likely to contain a security vulnerability), and you are done. All of this is possible through the tool's GUI.

EMET is compatible with any software; it does not matter whether the application you want to protect is Microsoft software or not. Below is a screenshot of the toolkit's GUI:

You should definitely try this tool. It is a must for every security engineer who worries about the security of their environment, with all that software installed on their servers, each piece of which could have security vulnerabilities that put the whole network and its systems at risk.




Security from the Inception!

Experience shows that consumers, whether they are ordinary people using their computers for everyday tasks or experienced network administrators, tend not to be very open to security updates. Talking to many network admins about security updates, and especially about operating system Service Packs (which do not necessarily include only security updates), particularly for Windows Server, most of them did not show much interest in installing certain updates and service packs, for a few reasons:

  • They thought of some of the security updates as unnecessary
  • Some believed it is too risky to install certain updates, fearing a possible service breakdown; some also believed certain hotfixes and security patches are incompatible with other services and could create problems
  • They mostly considered service packs unnecessary update packages, reasoning that they had already installed the hotfixes they needed and that the rest of what a service pack includes is unnecessary

In my own experience, I have seen people hit by a well-known Internet worm such as Sasser, and even after that they kept looking for a virus removal tool to get them out of trouble rather than a security patch, unaware of the fact that anti-virus software cannot stop a worm from functioning.

So you can see that the security people at Microsoft are on a very difficult road, trying to educate all those users and admins and convince them that patching a system is the best thing every user can do to stay safe on the Internet. But here comes another concept, called Security from the Inception, which says that instead of going through all the difficulty of educating users, which seems nearly impossible at times, a much better approach is to secure the code of the products themselves by applying the SDL (Security Development Lifecycle) from the very beginning of a product's development. That is how we can reduce the impact of security vulnerabilities missed during the software development process.

Right now Microsoft is on the right track in developing more secure code by applying the SDL, and we can already see fewer security vulnerabilities in its products.

Microsoft Domain and Server Isolation

What many network engineers think of as infrastructure security is securing the different layers of the network: installing firewalls and layer-3 devices, mostly between the internal network and the perimeter or between the perimeter network and the Internet, and configuring them to stop outsiders' threats against the internal network and the DMZ. Many think that the more we isolate the internal network from the outside, the better the security inside the network. In this way of thinking, the network has always been assumed to be under attack and threat from the outside, because hackers and all those malicious people have always been assumed to come from the world of the Internet.

This way of thinking is exactly what brings a lot of security risks and problems to networks, and therefore to businesses, especially in enterprise environments. This type of protection is inappropriate in today's world of technology and brings little benefit to the business, because thinking too much about the outside world and outside attacks has made us forget all about insider threats: the threats posed by malicious users inside the organization. To give a better picture of reality, malicious insider threats were ranked second in 2010 and first in 2009 among the top 10 information security threats, as officially announced by Perimeter E-Security.

What makes this type of threat so common? The trust placed in internal users, which leads them to carry out many illegal activities.

What I described above about malicious insiders leads us to the concept of defence in depth. What is important in this concept is that instead of looking at security as a whole, we should start securing the network from its smallest part: a single computer. If that single computer is secure and every connection to the other computers and servers is monitored, then there is far less to worry about, because we can control whom the computer is eligible to communicate with. In this concept, not only does the user in the domain need to be authenticated and authorized, but the computer does as well. If the computer account in the domain is not authenticated, and therefore not trusted, then the username alone would not work at all for the hacker.

Now imagine a hacker connecting his laptop to the network illegally due to the absence of physical security. Suppose this hacker even happens to have the enterprise administrator's password for the domain. With Domain and Server Isolation in place, the hacker would still not be able to connect to any of the computers, because his computer is untrusted by the others.

To put things in a nutshell, the domain and server isolation model makes use of the IPsec protocol suite to categorize hosts into groups so that it can better control the communication between them. The categories of hosts are as follows:
  • Trusted: IPsec-enabled hosts that are joined to the domain
  • Untrusted: non-IPsec-enabled hosts, hosts that are not joined to the domain, or hosts joined to a domain that is not trusted (these hosts are not able to communicate with the trusted ones)
  • Boundary: IPsec-enabled hosts that are able to communicate with both trusted and untrusted hosts
  • Exempted: special computers or servers that, for whatever reason, must not use IPsec and must be reachable by every computer, e.g., a DHCP server

With what I have described above, you now have a basic understanding of the concept of defence in depth and of domain and server isolation. To be more detailed and exact: in order for two computers to trust each other, they need to start an IPsec-secured communication in which both must follow the same encryption, integrity, and authentication methods; using these, the computers can trust one another and therefore be authenticated and authorized in the domain.

If two computers have the same methods of encryption, authentication, and integrity, they fall into the category of trusted hosts and can communicate with one another; otherwise they are considered untrusted and are not able to make any connection to the trusted ones. In Microsoft implementations, hosts receive the IPsec policies through Group Policy settings in the domain. The administrator specifies the trusted and untrusted hosts and, based on that, applies the required policy to them. Now consider the same host that was illegally and physically connected to the network: what is that computer called in the domain? Well... right... Untrusted.
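The categorization and matching rules described above can be sketched as a toy model in Python. This is purely illustrative: the host names and method tuples are made up, and this is not how Windows actually evaluates IPsec policy; it only captures the decision logic of the isolation model.

```python
# Toy model of the domain/server isolation rules: hosts fall into
# trusted / boundary / untrusted / exempt categories, and two IPsec
# peers only negotiate successfully when their encryption, integrity,
# and authentication methods match.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    category: str                                  # "trusted", "boundary", "untrusted", "exempt"
    methods: tuple = ("AES", "SHA1", "Kerberos")   # (encryption, integrity, authentication)

def can_communicate(a: Host, b: Host) -> bool:
    """Decide whether two hosts may talk, per the isolation model above."""
    # Exempt hosts (e.g. a DHCP server) must be reachable by everyone.
    if "exempt" in (a.category, b.category):
        return True
    # Boundary hosts may talk to both trusted and untrusted hosts.
    if "boundary" in (a.category, b.category):
        return True
    # An untrusted host can never reach a trusted host.
    if "untrusted" in (a.category, b.category):
        return a.category == b.category
    # Two trusted hosts still need matching IPsec methods to negotiate.
    return a.methods == b.methods

server = Host("FS01", "trusted")
laptop = Host("ROGUE-PC", "untrusted")   # the attacker's unjoined laptop
dhcp   = Host("DHCP01", "exempt")

print(can_communicate(server, laptop))   # False: untrusted cannot reach trusted
print(can_communicate(laptop, dhcp))     # True: exempt hosts are open to all
```

Note how the rogue laptop from the earlier example is rejected even though its owner knows an administrator password: the machine itself never becomes trusted, so the IPsec negotiation fails before any credentials matter.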

There is a lot to talk about when it comes to isolation inside the internal network. To get things off to a good start, you could begin with this link on the Microsoft website.
