Forefront Endpoint Protection 2012 Beta is out…

Microsoft has been working on its Forefront products for quite a while now, and they have been really successful in offering products that help increase the security of the network environment and of systems on both the server side and the client side.

I personally love them all, and I have the most experience working with Forefront Threat Management Gateway 2010; I truly enjoyed all the improvements it offers in comparison with ISA Server 2006. When you have worked with a product for quite a long time and wish for certain added functionality, and the new product then comes out and gives you all of it, that is when you get a really good feeling. I guess Forefront products give you exactly that kind of feeling.

Now Forefront Endpoint Protection 2012 Beta is out. Before we go a little further into the features, let me give you an overview of what it is. FEP helps businesses efficiently improve endpoint protection while decreasing costs. It is built on System Center Configuration Manager, allowing businesses to use their existing client management infrastructure to manage endpoint protection. Right now this beta version is compatible with System Center Configuration Manager 2012.

So let’s see what’s new in this beta release of FEP:

  • Support for System Center Configuration Manager 2012
  • Improved real-time alerts and reports
  • Role-based management
  • User-centric reports (post beta)
  • Easy migration from FEP 2010/ConfigMgr 2007
  • Support for FEP 2010 client agents

FEP 2012 provides protection against known and unknown threats using advanced techniques such as behavior monitoring, the Network Inspection System and heuristics. It also has a real-time, cloud-based update mechanism through the Microsoft SpyNet service that helps it stay up to date.

If you need more information on FEP, you can click here and here.

I hope you will be among the early adopters of FEP 2012.
Cheers

The Enhanced Mitigation Experience Toolkit (EMET)

In previous posts on my blog we talked a little bit about security exploits, how they work, and how to protect against attacks that use them. In this post I am excited to introduce a great toolkit offered by Microsoft to defend against exploitation of the system.

The tool is called the Enhanced Mitigation Experience Toolkit (EMET), and it applies exploit mitigation techniques that make it very difficult for exploits to defeat the system. The protection applied by EMET does not guarantee that the system will never be exploited; it just makes exploitation as difficult as possible, even for an exploit targeting a 0-day vulnerability.
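
To give a feel for what one of these mitigations looks like at the operating-system level, here is a minimal Python sketch (Windows only) that uses the documented kernel32 API GetProcessDEPPolicy to report whether Data Execution Prevention, one of the mitigations EMET can enforce per application, is active for its own process. It only illustrates DEP itself, not EMET’s internals:

```python
import ctypes
from ctypes import wintypes

# Windows-only: GetProcessDEPPolicy is a documented kernel32 API.
kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.GetCurrentProcess.restype = wintypes.HANDLE
kernel32.GetProcessDEPPolicy.argtypes = [
    wintypes.HANDLE,
    ctypes.POINTER(wintypes.DWORD),
    ctypes.POINTER(wintypes.BOOL),
]
kernel32.GetProcessDEPPolicy.restype = wintypes.BOOL

PROCESS_DEP_ENABLE = 0x00000001  # flag set when DEP is on for the process

def current_process_dep_policy():
    """Return (dep_enabled, permanent) for the current process.

    Note: on 64-bit processes DEP is always enforced and this call can fail,
    so a failure there effectively means DEP is on.
    """
    flags = wintypes.DWORD(0)
    permanent = wintypes.BOOL(0)
    ok = kernel32.GetProcessDEPPolicy(
        kernel32.GetCurrentProcess(), ctypes.byref(flags), ctypes.byref(permanent)
    )
    if not ok:
        raise ctypes.WinError(ctypes.get_last_error())
    return bool(flags.value & PROCESS_DEP_ENABLE), bool(permanent.value)

if __name__ == "__main__":
    enabled, permanent = current_process_dep_policy()
    print(f"DEP enabled: {enabled}, permanent: {permanent}")
```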

Working with EMET is pretty simple: download it from here, install it on your machine, and then choose the software you want it to protect (typically the applications you believe are most likely to contain a security vulnerability), and you are done. All of this is possible through the tool's GUI.

EMET is compatible with any software; it does not matter whether the application you want to protect is from Microsoft or not. Below is a screenshot of the toolkit's GUI:

You should definitely try this tool. It is a must for every security engineer who worries about the security of their environment, with all those applications installed on their servers, each of which could have a security vulnerability that puts the whole network and its systems at risk.

Want to learn more? Check out my new book below and get access to great, practical tutorials and step-by-step guides, all in one book:

To get more information about the book, click on the book below:


Cheers

Microsoft Domain and Server Isolation

What many network engineers think of as infrastructure security is securing the different layers of the network: installing firewalls and layer-3 devices, usually between the internal and perimeter layers or between the perimeter network and the internet, and configuring them to stop outsiders' threats against the internal network and the DMZ. Many think that the more we isolate the internal network from the outside, the better the security inside the network will be. In this way of thinking, the network is always assumed to be under attack and threat from the outside, because hackers and other malicious actors are assumed to come from the world of the internet.

The way of thinking I just described brings a lot of security risks and problems to networks, and therefore to businesses, especially in enterprise environments. This type of protection in today's world of technology is inadequate and brings little benefit to the business as a whole, because concentrating so heavily on the outside world and outside attacks has made us forget all about the insider threats posed by malicious users inside the organization or business. To paint a more realistic picture, malicious insider threats were ranked second in 2010 and first in 2009 among the top 10 information security threats, as officially announced by Perimeter E-Security.

What makes this type of threat so common? The trust placed in internal users, which leaves them free to carry out many illegal activities.

What I described above about malicious insiders leads us to the concept of defence in depth. What is important in this concept is that instead of looking at security as a whole, we should start securing the network from its smallest part, which is a single computer. If that single computer is secure and every connection to other computers and servers is controlled, then there is far less reason to worry, because we can control which hosts that computer is allowed to communicate with. In this concept, not only does the user in the domain need to be authenticated and authorized, but the computer does as well. If the computer account in the domain is not authenticated, and therefore not trusted, then a valid username is of no use to a hacker.

Now imagine a hacker illegally connecting his laptop to the network because of an absence of physical security, and suppose he happens to have the enterprise administrator's password for the domain. With Domain and Server Isolation in place, the hacker would still not be able to connect to any of the computers, because his machine is untrusted by the others.

In a nutshell, the domain and server isolation model makes use of the IPsec protocol suite to categorize hosts into groups so that communication between them can be better controlled. The categories of hosts are as follows (see the sketch after the list):
- Trusted: IPsec-enabled hosts that are joined to the domain
- Untrusted: hosts that are not IPsec-enabled, are not joined to the domain, or are joined to a domain that is not trusted (these hosts are not able to communicate with the trusted ones)
- Boundary: IPsec-enabled hosts that are able to communicate with both trusted and untrusted hosts
- Exempted: special computers or servers that, for whatever reason, do not use IPsec and that every computer must be able to communicate with, e.g. a DHCP server
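
Just to make these categories concrete, here is a small, purely illustrative Python sketch of the classification logic; the Host attributes and names are hypothetical and are not part of any Microsoft API:

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    TRUSTED = "trusted"
    UNTRUSTED = "untrusted"
    BOUNDARY = "boundary"
    EXEMPT = "exempt"

@dataclass
class Host:
    name: str
    ipsec_enabled: bool
    domain_joined: bool              # joined to our domain or to a trusted domain
    accepts_unsecured: bool = False  # boundary hosts also accept non-IPsec traffic
    exempt: bool = False             # e.g. a DHCP server everyone must reach

def classify(host: Host) -> Category:
    """Map a host to one of the four isolation categories described above."""
    if host.exempt:
        return Category.EXEMPT
    if host.ipsec_enabled and host.domain_joined:
        return Category.BOUNDARY if host.accepts_unsecured else Category.TRUSTED
    return Category.UNTRUSTED

# The rogue laptop from the earlier example: no IPsec policy, not domain-joined.
print(classify(Host("rogue-laptop", ipsec_enabled=False, domain_joined=False)))  # Category.UNTRUSTED
```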

From what I described above you now have a basic understanding of the concept of defence in depth and of domain and server isolation. To be more precise, in order for two computers to trust each other they need to establish an IPsec-secured communication in which both follow the same encryption, integrity and authentication methods; it is through these methods that the computers can trust one another and therefore be authenticated and authorized in the domain.

If two computers use the same encryption, authentication and integrity methods, they fall into the category of trusted hosts and can communicate with one another; otherwise they are considered untrusted and cannot make any connection to the trusted ones. In Microsoft implementations, hosts receive the IPsec policies through Group Policy settings in the domain. The administrator specifies which hosts are trusted and which are untrusted and, based on that, applies the required policy to them. Now consider that same laptop that was illegally and physically connected to the network. What is that computer called in the domain? Well... right... untrusted.
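
Continuing the same hypothetical sketch, the trust decision between two hosts can be thought of as two checks: both sides must fall into categories that are allowed to talk to each other, and their IPsec proposals (encryption, integrity and authentication methods) must overlap. The proposal names below are made-up examples, not the exact settings a Group Policy would push:

```python
def can_negotiate(methods_a: set[str], methods_b: set[str]) -> bool:
    """Two hosts can secure traffic only if they share at least one proposal."""
    return bool(methods_a & methods_b)

def can_communicate(a: Host, b: Host,
                    methods_a: set[str], methods_b: set[str]) -> bool:
    cat_a, cat_b = classify(a), classify(b)
    # Exempted hosts (e.g. DHCP) are reachable by everyone, with or without IPsec.
    if Category.EXEMPT in (cat_a, cat_b):
        return True
    # Untrusted hosts can reach boundary hosts (or each other), but never trusted ones.
    if Category.UNTRUSTED in (cat_a, cat_b):
        return Category.BOUNDARY in (cat_a, cat_b) or cat_a == cat_b
    # Both hosts are trusted/boundary: they still need a matching IPsec proposal.
    return can_negotiate(methods_a, methods_b)

corp_policy = {"ESP-AES128-SHA1-Kerberos"}  # example proposal pushed by Group Policy
laptop = Host("rogue-laptop", ipsec_enabled=False, domain_joined=False)
server = Host("file-server", ipsec_enabled=True, domain_joined=True)
print(can_communicate(laptop, server, set(), corp_policy))  # False: untrusted vs. trusted
```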

There is a lot to talk about when it comes to isolation inside the internal network. To get off to a good start, you could begin with this link on the Microsoft website.

Want to learn more? Check out my new book below and get access to great, practical tutorials and step-by-step guides, all in one book:

To get more information about the book, click on the book below:
