Saturday, February 16, 2013

Introduction to Exchange Server 2010









First things first – let’s cover some basic background: Exchange Server 2010 is an e-mail and calendaring application that runs on Windows Server 2008 and, like its predecessor Exchange Server 2007, can also integrate with your phone system. It is the seventh major version of the product and, while not revolutionary, it does include some important changes and lots of small improvements over Exchange Server 2007.


The scalability of Exchange Server 2010 has improved, especially when compared to the complex storage requirements of Exchange Server 2007. The user experience has also improved in Outlook Web App, and a lot of complex issues have been solved, or their complexity removed, to make the administrator's life much easier.


In this article I will give a brief overview of what’s changed in Exchange Server 2010, what the new features are, what features have been removed, and how it makes your life as an Exchange administrator easier.


1 Getting Started


Exchange Server 2010 will be available in two versions:


· Standard Edition, which is limited to hosting 5 databases.


· Enterprise Edition, which can host up to 100 databases.


However, the available binaries are identical for both versions; it's the license key that establishes the difference in functionality. Exchange Server 2010 is also only available as 64-bit (x64) software; there is absolutely no 32-bit version available, not even for testing purposes, and no Itanium (IA-64) version either.


Exchange Server 2010 also comes with two Client Access License (CAL) versions:


· Standard CAL – This license provides access to e-mail, calendaring, Outlook Web App and ActiveSync for Mobile Devices.


· Enterprise CAL – This is an additive license, and provides Unified Messaging and compliance functionality, as well as Forefront Security for Exchange Server and Exchange Hosted Filtering for anti-spam and anti-virus functionality.


This is not a complete list; for more information about licensing you can check the Microsoft web site at http://www.microsoft.com/exchange.


2 What’s been removed from Exchange Server 2010?


As always, as new features come, old features go. There are inevitably a few that have found themselves on the "deprecated list" this time around, and so will not be continued in Exchange Server 2010 and beyond. Since this is a much shorter list than the "new features", we’ll start here:


· There are some major changes in Exchange Server clustering: in Exchange Server 2007 you had LCR (Local Continuous Replication), CCR (Cluster Continuous Replication) and SCR (Standby Continuous Replication) - three different versions of replication, all with their own management interfaces. All three are no longer available in Exchange Server 2010.


· Windows Server Fail-over Clustering has been removed in Exchange Server 2010. Although seriously improved in Windows Server 2008, a lot of Exchange Administrators still found the fail-over clustering complex and difficult to manage. As a result, it was still prone to error and a potential source of all kinds of problems.


· Storage Groups are no longer available in Exchange Server 2010. The concepts of a database, log files and a checkpoint file are still there, but now it is just called a Database. It’s like CCR in Exchange Server 2007, where you could only have one Database per Storage Group.


· Due to major re-engineering of the Exchange Server 2010 database, Single Instance Storage (SIS) is no longer available. This means that when you send a 1 MB message to 100 recipients, the database will potentially grow by 100 MB. This will surely have an impact on the storage requirements in terms of space, but the performance improvements in the database are really great. I'll come back to that later in this article.


3 What’s new in Exchange Server 2010?


Exchange Server 2010 contains a host of improvements and a lot of new features, as well as minor changes and improvements. Over the coming sections, I'll provide an overview of the most significant updates and additions.


3.1 Outlook Web App


The most visible improvement for end-users is Outlook Web App (previously known as Outlook Web Access). One of the design goals for the Outlook Web App was a seamless cross-browser experience, so users running a browser like Safari, even on an Apple MacBook, should have exactly the same user experience as users running Internet Explorer.





Figure 1. Outlook Web App running on an Apple MacBook using a Safari browser!


Outlook Web App offers a very rich client experience and narrows the gap between a fully-fledged Outlook client and Outlook Web Access. To reinforce that experience, a lot of new features have been introduced. To name a few: Favorites, Search Folders, attaching messages to messages, integration with Office Communicator, a new Conversation View (which works very well!), integration with SMS (text) messages and the possibility to create Outlook Web Access policies, which give the Exchange organization administrator the ability to fine tune the user experience. The Web App is a feature which you will find mentioned throughout the book.
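As a small, hedged illustration of such a policy (the policy name, user, and the two feature switches shown are assumptions only), run from the Exchange Management Shell:

# Create an Outlook Web App mailbox policy, switch off a couple of features, and assign it to a user.
New-OwaMailboxPolicy -Name "Limited OWA Features"
Set-OwaMailboxPolicy -Identity "Limited OWA Features" -InstantMessagingEnabled $false -TextMessagingEnabled $false
Set-CASMailbox -Identity "JohnDoe" -OwaMailboxPolicy "Limited OWA Features"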


3.2 High Availability


The Exchange Server 2007 Cluster Continuous Replication (CCR) and Standby Continuous Replication (SCR) features are now combined into one new feature: the Database Availability Group (DAG).


Database copies exist just as in an Exchange Server 2007 CCR environment and are created in a "Database Availability Group", but it is now possible to create multiple copies. Replication is no longer performed at the server level as in Exchange Server 2007 but at the database level, which gives the Exchange administrator much finer control and granularity when it comes to creating a highly available Exchange organization. The servers in such a Database Availability Group can be in the same location, or in different locations to create an off-site solution. There's also no longer any need to install the Microsoft Cluster Service (MSCS) before setting up the Database Availability Group, as all cluster operations are now managed by Exchange.
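To give an idea of how little is involved, here is a minimal Exchange Management Shell sketch; the server, witness and database names are assumptions for illustration only:

# Create the Database Availability Group and add two Mailbox servers to it.
New-DatabaseAvailabilityGroup -Name "DAG01" -WitnessServer "EX-HUB01" -WitnessDirectory "C:\DAG01"
Add-DatabaseAvailabilityGroupServer -Identity "DAG01" -MailboxServer "EX-MBX01"
Add-DatabaseAvailabilityGroupServer -Identity "DAG01" -MailboxServer "EX-MBX02"
# Create a second copy of an existing database on the other DAG member.
Add-MailboxDatabaseCopy -Identity "DB01" -MailboxServer "EX-MBX02"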


3.3 Exchange Core Store functionality


Compared to Exchange Server 2003, Exchange Server 2007 dramatically decreased the I/O on the disk subsystem (sometimes by 70%). This was achieved by increasing the Exchange database page size from 4KB to 8KB and by using the 64-Bit operating system. The memory scalability of the 64-Bit platform makes it possible to use servers with huge amounts of memory, giving them the opportunity to cache information in memory instead of reading and writing everything to the disk.


One of the design goals of Exchange Server 2010 was to use a single 1TB SATA disk for the mailbox database and its log files. Another goal was to allow multi GB mailboxes without any negative performance impact on the server. To make this possible, the database schema in Exchange Server 2010 has now been flattened, making the database structure used by the Exchange Server much less complex than it was in Exchange Server 2007 and earlier. As a result, the I/O requirements of an Exchange Server 2010 server can be up to 50% less than for the same configuration in Exchange Server 2007.


As a result of the flattened database schema, Microsoft has removed Single Instance Storage (SIS) from Exchange Server 2010, but the improvements in performance are much more significant, and more-than-adequate compensation for the (comparatively minor) loss of SIS.


3.4 Microsoft Online Services


Microsoft is gradually moving "into the cloud". Besides an on-premises Exchange Server 2010 implementation, it is now also possible to host mailboxes in a datacenter; you can host your mailboxes with your own ISP, or with Microsoft Online Services.


Exchange Server 2010 can be 100% on-premises, 100% hosted, or it can be a mixed environment, with some percentage of your mailboxes hosted and the rest on-premises. This is, of course, fully transparent to end users, but it has its effects on administration. Instead of managing just one, on-site environment, you'll have to manage the hosted organization as well. This can all be handled through Exchange Server 2010's Exchange Management Console, where you can connect to multiple forests containing an Exchange organization.


3.5 New Administration Functionality


As a consequence of the major changes made to the High Availability features of Exchange Server 2010, the Exchange Management Console has also changed rather significantly.


Due to the new replication functionality, the Mailbox object is no longer tied to the Exchange Server object, but is now part of the Exchange Server 2010 organization. Also, since the concept of Storage Groups is no longer relevant, their administration has been removed from both the Exchange Management Console and the Exchange Management Shell. PowerShell cmdlets like New-StorageGroup, Get-StorageGroup, and so on, have all been removed, although their options have been moved into other cmdlets, such as the database-related cmdlets.
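As a hedged example of what that looks like in the Exchange Management Shell (the database name, server and paths are assumptions), creating a database no longer involves a Storage Group at all:

# Exchange Server 2007 required New-StorageGroup first; in Exchange Server 2010 the log
# folder location is simply a property of the database itself.
New-MailboxDatabase -Name "DB02" -Server "EX-MBX01" -EdbFilePath "D:\DB02\DB02.edb" -LogFolderPath "L:\DB02"
Mount-Database -Identity "DB02"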


Speaking of which, Exchange Server 2010 also runs on top of PowerShell Version 2. This version not only has a command line interface (CLI), but also the Integrated Scripting Environment (ISE). This enables you to easily create scripts and use variables, and you now have an output window where you can quickly view the results of your PowerShell command or script.


In addition to PowerShell V2, Exchange Server 2010 also uses Windows Remote Management (WinRM) Version 2. This gives you the option to remotely manage an Exchange Server 2010 server without the need to install the Exchange Management Tools on your workstation, and even via the Internet!
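As a rough illustration of what that looks like in practice (the Client Access Server URL below is an assumption), the remote session is built from a plain Windows PowerShell v2 prompt:

# Establish a remote session to an Exchange 2010 server and import its cmdlets locally.
$cred = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri "http://ex2010-cas01.contoso.com/PowerShell/" `
    -Authentication Kerberos -Credential $cred
Import-PSSession $session          # Exchange cmdlets now run in the local shell
Get-ExchangeServer                 # example: list the Exchange servers in the organization
Remove-PSSession $session          # clean up the remote session when finished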


One last small but interesting new feature is “Send Mail”, allowing you to send mail directly from the Exchange Management Console - ideal for testing purposes.


3.6 Exchange Control Panel


It is now possible to perform some basic Exchange management tasks using the options page in Outlook Web Access; not only on the user's own properties, but also at an organizational level. With this method, it is possible to create users, mailboxes, distribution groups and mail-enabled contacts, manage e-mail addresses, etc.





Figure 2. The Exchange Control Panel for basic management functions.


3.7 Active Directory Rights Management


Active Directory Rights Management Services (AD RMS) let you control what users can do with e-mail and other documents that are sent to them. It is possible, for example, to disable the "Forward" option on classified messages to prevent them being leaked outside the organization. With Exchange Server 2010, new features have been added to Rights Management Services, such as:


· Integration with Transport Rules - a template for using RMS to protect messages over the Internet.


· RMS protection for voice mail messages coming from the Unified Messaging Server Role.


Active Directory is discussed throughout this book, as Exchange Server 2010 has a much closer relationship with AD than previous versions of Exchange Server.


3.8 Transport and Routing


With Exchange Server 2010 it is possible to implement cross-premises message routing. When using a mixed hosting environment, Exchange Server 2010 can route messages from the datacenter to the on-premises environment with full transparency.


Exchange Server 2010 also offers (at last) enhanced disclaimers, making it possible to include HTML content such as images and hyperlinks in disclaimers. It is even possible to use Active Directory attributes (from the user's private property set) to create a personal disclaimer.
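A hedged sketch of such a disclaimer rule (the rule name, HTML text and image URL are illustrative), created from the Exchange Management Shell:

# Append an HTML disclaimer to outbound mail and merge in the sender's AD display name.
New-TransportRule -Name "Outbound Disclaimer" -SentToScope NotInOrganization `
    -ApplyHtmlDisclaimerLocation Append `
    -ApplyHtmlDisclaimerText "<hr/><p>Kind regards,<br/>%%DisplayName%%<br/><img src='http://www.contoso.com/logo.png'/></p>" `
    -ApplyHtmlDisclaimerFallbackAction Wrap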


To create a highly available and reliable routing model, the Hub Transport Servers in Exchange Server 2010 now contain Shadow Redundancy. A message is normally stored in a database on the Hub Transport Server and, in Exchange Server 2007, the message is deleted as soon as it is sent to the next hop. In Exchange Server 2010, the message is only deleted after the next hop reports successful delivery of the message. If delivery is not reported, the Hub Transport Server will try to resend the message.


To further support high availability of messaging, messages stay in the transport dumpster on a Hub Transport Server, and are only deleted once they have been successfully replicated to all database copies. The database on the Hub Transport Server has also been improved at the ESE level, resulting in a higher message throughput at the transport level.


3.9 Permissions


Previous versions of Exchange Server relied on delegation of control via multiple Administrative Groups (specifically, Exchange Server 2000 and Exchange Server 2003) or via group membership. Exchange Server 2010 now contains a Role Based Access Control (RBAC) model to implement a powerful and flexible management model.
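A minimal sketch of what RBAC looks like in practice (the role group, OU and member names are assumptions): a custom role group whose members can only manage recipients in one part of the organization.

# Create a scoped role group and add an administrator to it (Exchange Management Shell).
New-RoleGroup -Name "Branch Office Recipient Admins" `
    -Roles "Mail Recipients","Mail Recipient Creation" `
    -RecipientOrganizationalUnitScope "contoso.com/BranchOffice"
Add-RoleGroupMember -Identity "Branch Office Recipient Admins" -Member "JaneAdmin"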


3.10 Messaging Policy and Compliance


To help organizations meet compliance regulations, Microsoft introduced the concept of Managed Folders in Exchange Server 2007, offering a basic set of compliance features. This has been enhanced with new interfaces in Exchange Server 2010, such as the option of tagging messages, cross-mailbox searches, and new transport rules and actions.


3.11 Mailbox Archive


Exchange Server 2010 now contains a personal archive; this is a secondary mailbox connected to a user's primary mailbox, and located in the same Mailbox Database as the user's primary mailbox. Since Exchange Server 2010 now supports a JBOD (Just a Bunch Of Disks) configuration this isn't too big a deal, and the Mailbox Archive really is a great replacement for (locally stored) .PST files.
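A minimal Exchange Management Shell sketch (the mailbox and policy names are assumptions): enable the archive and let a retention policy move old items into it automatically.

# Enable the personal archive for an existing mailbox.
Enable-Mailbox -Identity "JohnDoe" -Archive
# Items older than a year are moved from the primary mailbox to the personal archive.
New-RetentionPolicyTag -Name "Move to archive after 1 year" -Type All `
    -RetentionEnabled $true -AgeLimitForRetention 365 -RetentionAction MoveToArchive
New-RetentionPolicy -Name "Default Archive Policy" -RetentionPolicyTagLinks "Move to archive after 1 year"
Set-Mailbox -Identity "JohnDoe" -RetentionPolicy "Default Archive Policy"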


3.12 Unified Messaging


The Exchange Server 2010 Unified Messaging Server Role integrates a telephone system, like a PABX, with the Exchange Server messaging environment. This makes it possible to offer Outlook Voice Access, enabling you to interact with the system using your voice, listen to voice mail messages, or have messages read to you. Exchange Server 2010 offers some new functionality like Voicemail Preview, the Message Waiting Indicator, integration with text (SMS) messages, additional language support, etc. Unified Messaging is, unfortunately, a little outside the scope of this book, so you won't find me going into too much detail later on.


4 Exchange Server 2010 and Active Directory


As far as Active Directory is concerned, both the domain functional level and the forest functional level need to be at least Windows Server 2003. This might be confusing, since Exchange Server 2010 only runs on Windows Server 2008 or Windows Server 2008 R2, but that requirement applies only to the actual server that Exchange Server 2010 is running on!


The Schema Master in the forest needs to be a Windows Server 2003 SP2 server (Standard or Enterprise Edition) or higher. Likewise, in each Active Directory Site where Exchange Server 2010 will be installed, there must be at least one Standard or Enterprise Windows Server 2003 SP2 (or higher) server configured as a Global Catalog server.


From a performance standpoint, as with Exchange Server 2007, the ratio of 4:1 for Exchange Server processors to Global Catalog server processors still applies to Exchange Server 2010. Using a 64-Bit version of Windows Server for Active Directory will naturally also increase the system performance.


Note. It is possible to install Exchange Server 2010 on an Active Directory Domain Controller. However, for performance and security reasons it is recommended not to do this, and instead to install Exchange Server 2010 on a member server in a domain.


4.1 Active Directory partitions


A Windows Server Active Directory consists of one forest, one or more domains and one or more sites. Exchange Server 2010 is bound to a forest, and therefore one Exchange Server 2010 Organization is connected to one Active Directory forest. The actual information in an Active Directory forest is stored in three locations, also called partitions:


· Schema partition – this contains a "blueprint" of all objects and properties in Active Directory. In a programming scenario this would be called a class. When an object, like a user, is created, it is instantiated from the user blueprint in Active Directory.


· Configuration partition – this contains information that’s used throughout the forest. Regardless of the number of domains that are configured in Active Directory, all domain controllers use the same Configuration Partition in that particular Active Directory forest. As such, it is replicated throughout the Active Directory forest, and all changes to the Configuration Partition have to be replicated to all Domain Controllers. All Exchange Server 2010 information is stored in the Configuration Partition.


· Domain Partition – this contains information regarding the domains installed in Active Directory. Every domain has its own Domain Partition, so if there are 60 domains installed there will be 60 different Domain Partitions. User information, including Mailbox information, is stored in the Domain Partition.


4.2 Delegation of Control





Figure 3. The Configuration partition in Active Directory holds all information regarding Exchange Server 2010 in an Administrative Group.


In Exchange Server 2003 the concept of “Administrative Groups” was used to delegate control between different groups of administrators. A default “First Administrative Group” was created during installation, and subsequent Administrative Groups could be created to install more Exchange 2003 servers and delegate control of these servers to other groups. The Administrative Groups were stored in the Configuration Partition so all domains and thus all domain controllers and Exchange servers could see them.
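Because everything Exchange-related lives in the Configuration partition, you can see this for yourself with a few lines of PowerShell using plain ADSI; this is only an illustrative sketch (run on any domain-joined machine, and the output will differ per forest):

# Read the Configuration naming context from RootDSE and browse the Exchange container.
$configNC = ([ADSI]"LDAP://RootDSE").configurationNamingContext
$exchangeRoot = [ADSI]"LDAP://CN=Microsoft Exchange,CN=Services,$configNC"
# List the Exchange organization object(s) stored in the Configuration partition.
$exchangeRoot.psbase.Children | ForEach-Object { $_.distinguishedName }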


Fun fact: shift every character in the name FYDIBOHF23SPDLT (which you will meet below) back by one - F becomes E, Y becomes X, and so on - and you get EXCHANGE12ROCKS.


Exchange Server 2007 used Active Directory Security Groups for delegation of control, and only one Administrative Group is created during installation of Exchange Server 2007, called "Exchange Administrative Group (FYDIBOHF23SPDLT)". All servers in the organization are installed in this Administrative Group. Permissions are assigned to Security Groups, and Exchange administrators are members of these Security Groups.


Exchange Server 2010 uses the same Administrative Group, but delegation of control is not done using Active Directory Security Groups, as Microsoft has introduced the concept of “Role Based Access Control” or RBAC.


4.3 Active Directory Sites


Exchange Server 2010 uses Active Directory Sites for routing messages. But what is an Active Directory site?


When a network is separated into multiple physical locations, connected with "slow" links and separated into multiple IP subnets, then in Active Directory terms we're talking about sites. Say, for example, there's a main office located in Amsterdam with an IP subnet of 10.10.0.0/16, and a Branch Office located in London with an IP subnet of 10.11.0.0/16. Both locations have their own Active Directory Domain Controller, handling authentication for clients in their own subnet. Active Directory site links are created to control replication traffic between sites. Clients in each site use DNS to find services like Domain Controllers in their own site, thus preventing them from using those services over the WAN link.





Figure 4. Two subnets in Active Directory, one for the main office and one for the Amsterdam Datacenter.


Exchange Server 2010 uses Active Directory sites for routing messages between sites. Using our current example, if there is an Exchange Server 2010 Hub Transport Server in Amsterdam and an Exchange Server 2010 Hub Transport Server in London, then the IP Site Links in Active Directory are used to route messages from Amsterdam to London. This concept was first introduced in Exchange Server 2007, and nothing has changed in Exchange Server 2010.


Exchange Server 2003 used the concept of Routing Groups, even though Active Directory already used Active Directory Sites; Active Directory Sites and Exchange Server Routing Groups are not compatible with each other. To have Exchange Server 2003 and Exchange Server 2010 work together in one Exchange organization, a special connector has to be created - the so-called Interop Routing Group Connector.


5 Exchange Server coexistence


It is very likely that large organizations will gradually move from an earlier version of Exchange Server to Exchange Server 2010, and Exchange Server 2010 can coexist, in the same forest, with (both) Exchange Server 2007 and Exchange Server 2003. It is also possible to move from a mixed Exchange Server 2003 and Exchange Server 2007 environment to Exchange Server 2010.


Please note that it is not possible to have a coexistence scenario where Exchange Server 2000 and Exchange Server 2010 are installed in the same Exchange Organization. This is enforced in the setup of Exchange Server 2010. If the setup detects an Exchange Server 2000 installation the setup application is halted and an error is raised.


Integrating Exchange Server 2010 into an existing Exchange Server 2003 or Exchange Server 2007 environment is called a “transition” scenario. A “migration” scenario is where a new Active Directory forest is created where Exchange Server 2010 is installed. This new Active Directory forest is running in parallel to the “old” Active Directory with a previous version of Exchange Server. Special care has to be taken in this scenario, especially when both organizations coexist for any significant amount of time. Directories have to be synchronized during the coexistence phase, and the free/busy information will need to be constantly synchronized as well, since you’ll still want to offer this service to users during the coexistence period.


This is a typical scenario when 3rd party tools like Quest are involved, although it is not clear at the time of writing this book how Quest is going to deal with Exchange Server 2010 migration scenarios.


6 Exchange Server 2010 Server roles


Up until Exchange Server 2003, all roles were installed on one server and administrators were unable to select which features were available. It was possible to designate an Exchange 2000 or Exchange 2003 server as a so called “front-end server”, but this server was just like an ordinary Exchange server acting as a protocol proxy. It still had a Mailbox Database and a Public Folder database installed by default.


Exchange Server 2007 introduced the concept of “server roles” and this concept is maintained in Exchange Server 2010. The following server roles, each with a specific function, are available in Exchange Server 2010:


· Mailbox Server (MB) role.


· Client Access Server (CAS) role.


· Hub Transport Server (HT) role.


· Unified Messaging Server (UM) role.


· Edge Transport Server (Edge) role.


These server roles can be installed on dedicated hardware, where each machine has its own role, but they can also be combined. A typical server installation, for example in the setup program, combines the Mailbox, Client Access and Hub Transport Server role. The Management Tools are always installed during installation, irrespective of which server role is installed.
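As a hedged illustration, the same role selection can be made with the unattended setup, run from the Exchange Server 2010 installation media (the role combination shown is simply the typical Mailbox, Client Access and Hub Transport example mentioned above):

# Unattended installation of the three "typical" roles; the Management Tools come along automatically.
.\setup.com /mode:Install /roles:Mailbox,ClientAccess,HubTransport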


By contrast, the Edge Transport Server role cannot be combined with any other role. In fact, the Edge Transport Server role cannot even be part of the (internal) domain, since it is designed to be installed in the network’s Demilitarized Zone (DMZ).


There are multiple reasons for separating Exchange Server into multiple server roles:


· Enhanced scalability – since one server can be dedicated to one server role, the scalability benefits are huge. This specific server can be configured and optimized for one particular role, resulting in a high performance server.


· Improved security – one dedicated server can be hardened for security using the Security Configuration Wizard (SCW). Since only one Server Role is used on a particular server, all other functions and ports are disabled, resulting in a more secure system.


· Simplified deployment and administration – a dedicated server is easier to configure, easier to secure and easier to administer.


I will explain each server role in detail, in the following sections.


6.1 Mailbox Server role


The Mailbox Server role is the heart of your Exchange Server 2010 environment. This is where the Mailbox Database and Public Folder Database are installed. The sole purpose of the Mailbox Server role is to host Mailboxes and Public Folders; nothing more. In previous versions of Exchange Server, including Exchange Server 2007, Outlook clients using MAPI still connected directly to the Mailbox Server role, but with Exchange Server 2010 this is no longer the case. MAPI clients now connect to a service called "MAPI on the Middle Tier" (MoMT), running on the Client Access Server. The name MoMT is still a code name and is expected to change before Exchange Server 2010 is released.


The Mailbox Server Role does not route any messages, it only stores messages in mailboxes. For routing messages, the Hub Transport Server role is needed. This latter role is responsible for routing all messages, even between mailboxes that are on the same server, and even between mailboxes that are in the same mailbox database.


For accessing mailboxes, a Client Access Server is also always needed; it is just not possible to access any mailbox without a Client Access Server.





Figure 5. The Mailbox Server role is hosting Mailboxes and Public Folders.


Note that Internet Information Server is needed on a Mailbox Server role in order to implement the Role Based Access Control model (RBAC) even if no client is accessing the Mailbox Server directly.


As I mentioned, Storage Groups no longer exist in Exchange Server 2010, but mailboxes are still stored in databases, just like in Exchange Server 2007. Although rumors have been circulating for more than 10 years that the database engine used in Exchange Server will be replaced by a SQL Server engine, it has not happened yet. Just as in earlier versions of Exchange Server, the Extensible Storage Engine (ESE) is still being used, although major changes have been made to the database and the database schema.


By default, the first database on a server will be installed in the directory:


C:\Program Files\Microsoft\Exchange Server\V14\Mailbox\Mailbox Database <<identifier>>





Figure 6. The default location for the Mailbox Databases and the log files.


The <<identifier>> is a unique number to make sure that the Mailbox Database name is unique within the Exchange organization.


It is a best practice, from both a performance and a recovery perspective, to place the database and the accompanying log files on a dedicated disk. This disk can be on a Fibre Channel SAN, an iSCSI SAN, or on a Direct Attached Storage (DAS) solution. Whilst it was a design goal to limit the amount of disk I/O to a level where both the database and the log files could be installed on a 1TB SATA disk, this is only an option if Database Copies are configured and you have at least two copies of the Mailbox Database, in order to avoid a single point of failure.
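If a database was created in the default location, it can be moved to such a dedicated disk afterwards; a minimal sketch, assuming the database name and target paths shown:

# Note: the database is temporarily dismounted while its files are being moved.
Move-DatabasePath -Identity "DB01" -EdbFilePath "E:\DB01\DB01.edb" -LogFolderPath "F:\DB01"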


6.2 Client Access Server role


The Client Access Server role offers access to the mailboxes for all available protocols. In Exchange Server 2003, Microsoft introduced the concept of "front-end" and "back-end" servers, and the Client Access Server role is comparable to an Exchange Server 2003 front-end server.


All clients connect to the Client Access Server and, after authentication, the requests are proxied to the appropriate Mailbox Server. Communication between the client and the Client Access Server is via the normal protocols (HTTP, IMAP4, POP3 and MAPI), and communication between the Client Access Server and the Mailbox Server is via Remote Procedure Calls (RPC).


The following functionality is provided by the Exchange Server 2010 Client Access Server:


· HTTP for Outlook Web App.


· Outlook Anywhere (formerly known as RPC/HTTP) for Outlook 2003, Outlook 2007 and Outlook 2010.


· ActiveSync for (Windows Mobile) PDAs.


· Internet protocols POP3 and IMAP4.


· MAPI on the Middle Tier (MoMT).


· Availability Service, Autodiscover and Exchange Web Services. These services are offered to Outlook 2007 and Outlook 2010 clients and provide free/busy information, automatic client configuration, Offline Address Book downloads and Out-of-Office functionality.


Note. SMTP Services are not offered by the Client Access Server. All SMTP Services are handled by the Hub Transport Server.


At least one Client Access Server is needed for each Mailbox Server in an Active Directory site, as well as a fast connection between the Client Access Server and the Mailbox Server. The Client Access Server also needs a fast connection to a Global Catalog server.


The Client Access Server should be deployed on the internal network and NOT in the network’s Demilitarized Zone (DMZ). In order to access a Client Access Server from the Internet, a Microsoft Internet Security and Acceleration (ISA) Server should be installed in the DMZ. All necessary Exchange services should be “published” to the Internet, on this ISA Server.









Figure 7. The Client Access Server is responsible for providing access to (Internet) clients. The ISA Server is not in this picture.


6.3 Hub Transport Server role


The Hub Transport Server role is responsible for routing messages, not only between the Internet and the Exchange organization, but also between Exchange servers within your organization.





Figure 8. The Hub Transport Server is responsible for routing all messages


All messages are always routed via the Hub Transport Server role, even if the source and the destination mailbox are on the same server and even if the source and the destination mailbox are in the same Mailbox Database. For example in Figure 8:


· Step 1: A message is sent to the Hub Transport Server


· Step 2: If the recipient is on the same Mailbox Server as the sender, the message is sent back to that server


· Step 3: When the recipient is on another mailbox server, the message is routed to the appropriate Hub Transport Server. This is then followed by…


· …Step 4: The second Hub Transport Server delivers the message to the Mailbox Server of the recipient


The reason for routing all messages through the Hub Transport Server is simply compliance. Using the Hub Transport Server, it is possible to track all messages flowing through the Exchange organization and to take appropriate action if needed (legal requirements, HIPAA, Sarbanes-Oxley, etc.). On the Hub Transport Server the following agents can be configured for compliance purposes:


· Transport Rule agents – using Transport Rules, all kinds of actions can be applied to messages according to the Rule’s filter or conditions. Rules can be applied to internal messages, external messages or both;


· Journaling agents – using the journaling agent, it is possible to save a copy of every message sent or received by a particular recipient.


Since a Mailbox Server does not deliver any messages, every Mailbox Server in an Active Directory site requires a Hub Transport Server in that site. The Hub Transport Server also needs a fast connection to a Global Catalog server for querying Active Directory. This Global Catalog server should be in the same Active Directory site as the Hub Transport Server.


When a message has an external destination, i.e. a recipient on the Internet, the message is sent from the Hub Transport Server to the ‘outside world’. This may be via an Exchange Server 2010 Edge Transport Server in the DMZ, but the Hub Transport Server can also deliver messages directly to the Internet.


Optionally, the Hub Transport Server can be configured to deal with anti-spam and anti-virus functions. The anti-spam agents are not enabled on a Hub Transport Server by default, since this service is intended to run on an Edge Transport Server in the DMZ. Microsoft supplies a script on every Hub Transport Server that can be used to enable the anti-spam agents if necessary.
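A hedged example of enabling them (the default installation path is shown; adjust it if Exchange is installed elsewhere):

# Run from the Exchange Management Shell on the Hub Transport server.
& "C:\Program Files\Microsoft\Exchange Server\V14\Scripts\install-AntispamAgents.ps1"
Restart-Service MSExchangeTransport   # the transport service must be restarted afterwards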


Anti-virus services can be achieved by installing the Microsoft Forefront for Exchange software. The anti-virus software on the Hub Transport Server will scan inbound and outbound SMTP traffic, whereas anti-virus software on the Mailbox Server will scan the contents of a Mailbox Database, providing a double layer of security.


6.4 Edge Transport Server role


The Edge Transport Server role was introduced with Exchange Server 2007, and provides an extra layer of message hygiene. It is typically installed as an SMTP gateway in the network's "Demilitarized Zone" or DMZ. Messages from the Internet are delivered to the Edge Transport Server and, after anti-spam and anti-virus scanning, the messages are forwarded to a Hub Transport Server on the internal network.





Figure 9. The Edge Transport Server is installed between the Internet and the Hub Transport Server.


The Edge Transport Server can also provide the following services:


· Edge Transport Rules – like the Transport Rules on the Hub Transport Server, these rules can also control the flow of messages that are sent to, or received from the Internet when they meet a certain condition.


· Address rewriting – with address rewriting, the SMTP address of messages sent to or received from the Internet can be changed. This can be useful for hiding internal domains, for example after a merger of two companies, but before one Active Directory and Exchange organization is created.


The Edge Transport Server is installed in the DMZ and cannot be a member of the company’s internal Active Directory and Exchange Server 2010 organization. The Edge Transport Server uses the Active Directory Lightweight Directory Services (AD LDS) to store all information. In previous versions of Windows this service was called Active Directory Application Mode (ADAM). Basic information regarding the Exchange infrastructure is stored in the AD LDS, like the recipients and the Hub Transport Server which the Edge Transport Server is sending its messages to.


To keep the AD LDS database up to date, a synchronization feature called EdgeSync is used, which pushes information from the Hub Transport Server to the Edge Transport Server at regular intervals.
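A sketch of the subscription workflow that sets EdgeSync up (the file path and site name are assumptions):

# On the Edge Transport server: export a subscription file.
New-EdgeSubscription -FileName "C:\EdgeSubscription.xml"
# Copy the file to a Hub Transport server in the internal Active Directory site and import it there.
New-EdgeSubscription -FileData ([byte[]]$(Get-Content -Path "C:\EdgeSubscription.xml" -Encoding Byte -ReadCount 0)) -Site "Default-First-Site-Name"
Start-EdgeSynchronization   # force an immediate synchronization instead of waiting for the schedule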


6.5 Unified Messaging Server role


The Exchange Server 2010 Unified Messaging Server role combines voice messages and e-mail messages into one store: the mailbox database. Using the Unified Messaging Server role it is possible to access all messages in the mailbox using either a telephone or a computer.


The phone system can be an IP based system or a “classical” analog PBX system, although in the latter case, a special Unified Messaging IP Gateway is needed to connect the two.





Figure 10. Overview of the Unified Messaging Infrastructure.


The Unified Messaging Server role provides users with the following features:


· Call Answering – this feature acts as an answering machine. When somebody cannot answer the phone, a personal message can be played after which a caller can leave a message. The message will be recorded and sent to the recipient’s mailbox as an .mp3 file.


· Subscriber Access – sometimes referred to as "Outlook Voice Access". Using Subscriber Access, users can access their mailbox using a normal phone line and listen to their voicemail messages. It is also possible to access regular mailbox items like messages and calendar items, and even reschedule appointments in the calendar.


· Auto Attendant – using the Auto Attendant, it is possible to create a custom menu in the Unified Messaging system using voice prompts. A caller can use either the telephone keypad or his or her voice to navigate through the menu.


The Unified Messaging service installed on the Unified Messaging Server role works closely with the Microsoft Exchange Speech Engine Service. This Speech Engine Service provides the following services:


· Dual Tone Multi Frequency (DTMF) also referred to as the touchtone (the beeps you hear when dialing a phone number or accessing a menu).


· Automatic Speech Recognition.


· Text-to-Speech service that’s responsible for reading mailbox items and reading the voice menus.


The Unified Messaging Server role should be installed in an Active Directory site together with a Hub Transport Server, since this latter server is responsible for routing messages to the Mailbox Servers. It should also have a fast connection to a Global Catalog server. If possible, the Mailbox Server role should be installed as close as possible to the Unified Messaging Server role, preferably in the same site and with a decent network connection.


7 Summary


Exchange Server 2010 is the new Messaging and Collaboration platform from Microsoft, and it has a lot of new, compelling features. The new High Availability, management and compliancy features make Exchange Server 2010 a very interesting product for the Exchange administrator. In fact, the new features in Exchange Server 2010 will generally result in less complexity, which is always a good thing!




Under The Hood: What's changed?


· By far the most important change with respect to Exchange Server 2007 is the new Database Availability Group. This will allow you to create multiple copies of an Exchange Server database within your organization, and you are no longer bound to a specific site (like in Exchange Server 2007), but can now stretch across multiple sites. Microsoft has also successfully transformed Cluster Continuous Replication and Stand-by Continuous Replication into a new ‘Continuous Availability’ technology.


· While on the topic of simplifying, a lot of SysAdmins were having difficulties with the Windows Server fail-over clustering, so Microsoft has simply ‘removed’ this from the product. The components are still there, but they are now managed using the Exchange Management Console or Exchange Management Shell.


· With the new Personal Archive ability, a user can now have a secondary mailbox, acting as a personal archive - this really is a .PST killer! You now have the ability to import all the users’ .PST files and store them in the Personal Archive, and using retention policies you can move data from the primary mailbox to the archive automatically, to keep the primary mailbox at an acceptable size, without any hassle.


· To deal with ever-growing storage requirements, Microsoft also made considerable changes to the underlying database system. All you will need to store your database and log files with Exchange Server 2010 is a 2 TB SATA (or other Direct Attached Storage) disk. As long as you have multiple copies of the database, you're safe! And the maximum supported database size? That has improved from 200 GB (in an Exchange Server 2007 CCR environment) to 2 TB (in a multiple database copy Exchange Server 2010 environment). If you haven't yet considered what your business case will look like when upgrading to Exchange Server 2010, bear in mind that this will truly save a tremendous amount of storage cost - and that's not marketing talk!


· Installing Exchange 2010 is not at all difficult, and configuring a Database Availability Group with multiple copies of the Mailbox Databases is just a click of the mouse (you only have to be a little careful when creating multi-site DAGs). Even installing Exchange Server 2010 into an existing Exchange Server 2003 or Exchange Server 2007 environment is not that hard! The only thing you have to be aware of is the additional namespace that shows up. Besides the standard namespaces like webmail.contoso.com and Autodiscover.contoso.com, a third namespace shows up in a coexistence environment: legacy.contoso.com. This is used when you still have mailboxes on the old (i.e. Exchange Server 2003 or Exchange Server 2007) platform in a mixed environment.


· Lastly, for a die-hard GUI administrator it might be painful to start managing an Exchange environment with the Exchange Management Shell. Basic management can be done with the graphical Exchange Management Console, but you really do have to use the Shell for the nitty-gritty configuration. The Shell is remarkably powerful, and it takes quite some getting used to, but with it you can do fine-grained management, and even create reports using features like output-to-HTML or save-to-.CSV file. Very neat!

Wednesday, January 30, 2013

Resolving Cisco Router/Switch Tftp Problems: Source IP Address - The 'IP TFTP Source-Interface' Command

 
 
   
When working with Cisco equipment that has multiple IP interfaces, a common problem engineers face is trying to successfully TFTP to or from the Cisco device. This issue is usually encountered when the Cisco device (router or multi-layer switch) uses a source IP address that cannot reach the TFTP server's IP address or is blocked by access lists.
 
Luckily, there is a way around this problem, and it’s a simple one.
 
To ensure your Cisco router or multi-layer switch uses the correct interface during any tftp session, use the ip tftp source-interface command to specify the source-interface that will be used by the device.
 
The following example instructs our Cisco 3750 Layer 3 switch to use VLAN 5 interface as the source ip interface for all tftp sessions:
 
3750G-Stack(config)# ip tftp source-interface vlan 5
 
As shown below, VLAN 5 has IP address 192.168.131.1 assigned to it, therefore this IP address will be used as the source for all TFTP sessions from now on:
 
3750G-Stack# show ip interface brief

Interface    IP-Address       OK?  Method  Status  Protocol
Vlan1        192.168.50.1     YES  NVRAM   up      up
Vlan2        192.168.130.1    YES  NVRAM   up      up
Vlan3        192.168.135.1    YES  NVRAM   up      up
Vlan4        192.168.19.1     YES  NVRAM   up      up
Vlan5        192.168.131.1    YES  NVRAM   up      up
Vlan6        192.168.141.1    YES  NVRAM   up      up
Vlan7        192.168.170.1    YES  NVRAM   up      up
Vlan8        192.168.180.1    YES  NVRAM   up      up
 
 

Cisco GRE and IPSec - GRE over IPSec - Selecting and Configuring GRE IPSec Tunnel or Transport Mode

 
 
 
 
GRE Tunnels are very common amongst VPN implementations thanks to their simplicity and ease of configuration. With broadcasting and multicasting support, as opposed to pure IPSec VPNs, they tend to be the number one engineers' choice, especially when routing protocols are used amongst sites.
 
The problem with GRE is that it is an encapsulation protocol, which means that while it does a terrific job providing connectivity between sites, it does a terrible job encrypting the data being transferred between them. GRE is stateless, offering no flow control mechanisms (think of UDP). This is where the IPSec protocol comes into the picture.
 
IPSec’s objective is to provide security services for IP packets such as encrypting sensitive data, authentication, protection against replay and data confidentiality. IPSec is extensively covered in our IPSec protocol article.
 
IPSec can be used in conjunction with GRE to provide top-notch security encryption for our data, thereby providing a complete, secure and flexible VPN solution. IPSec can operate in two different modes, Tunnel mode and Transport mode. Both of these modes are covered extensively in our Understanding VPN IPSec Tunnel Mode and IPSec Transport Mode article. Additionally, Cisco GRE Tunnel configuration is covered in our Configuring Cisco Point-to-Point GRE Tunnels article. We highly recommend reading these articles before proceeding, as they are prerequisites for understanding the information covered here.
 
As with IPSec, when configuring GRE with IPSec there are two modes in which GRE IPSec can be configured, GRE IPSec Tunnel mode and GRE IPSec Transport mode.
 
This article examines the difference between GRE IPSec Tunnel and GRE IPSec Transport mode, and explains the packet structure differences along with the advantages and disadvantages of each mode.
 
GRE IPSec Tunnel Mode
 
With GRE IPSec tunnel mode, the whole GRE packet (which includes the original IP header packet), is encapsulated, encrypted and protected inside an IPSec packet. GRE over IPSec Tunnel mode provides additional security because no part of the GRE tunnel is exposed, however, there is a significant overhead added to the packet. This additional overhead decreases the usable free space for our payload (Original IP packet), that means possibly more fragmentation will occur when transmitting data over a GRE IPSec Tunnel VPN.
 
IPSec Tunnel mode is the default configuration option for both GRE and non-GRE IPSec VPNs. When configuring the IPSec transform set, no other configuration commands are required to enable tunnel mode:
 
R1(config)# crypto ipsec transform-set TS esp-3des esp-md5-hmac
 
Calculating GRE IPSec Tunnel Mode Overhead
 
Calculating the overhead will help us understand how much additional space GRE over IPSec in Tunnel mode requires and our effective usable space.
 
The packet structure below shows an example of a GRE over IPSec in Tunnel mode:
 
Two important points to keep in mind when calculating the overhead:
 
Depending on the encryption algorithm used in the crypto transform set, the Initialization Vector (IV) shown could be 8 or 16 bytes long. For example, DES or 3DES introduces an 8-byte IV field, whereas AES introduces a 16-byte IV field. In our example, we are using 3DES encryption, therefore producing an 8-byte IV field.
 
The ESP Trailer will usually vary in size. Its job is to ensure that the Pad Length, Next Header fields (both 1-byte long and contained within the ESP Trailer) & ESP Auth.Trailer are aligned on a 4-byte boundary. This means the total number of bytes, when adding the three fields together, must be a multiple of 4.
 
Following is the calculated overhead:
 
ESP Overhead: 20 (IP Hdr) + 8 (ESP Hdr) + 8 (IV) + 4 (ESP Trailer) + 12 (ESP Auth) = 52 Bytes
 
Note: ESP Trailer has been calculated as 4 bytes as per above note.
 
GRE Overhead: 20 (GRE IP Hdr) + 4 (GRE) = 24 Bytes
 
Total Overhead: 52 + 24 = 76 Bytes
 
GRE IPSec Transport Mode
 
With GRE IPSec transport mode, the GRE packet is encapsulated and encrypted inside the IPSec packet, however, the GRE IP Header is placed at the front. This effectively exposes the GRE IP Header as it is not encrypted the same way it is in Tunnel mode.
 
IPSec Transport mode is not used by default configuration and must be configured using the following command under the IPSec transform set:
 
R1(config)# crypto ipsec transform-set TS esp-3des esp-md5-hmac
R1(cfg-crypto-trans)# mode transport
 
 
 
 
 
GRE IPSec transport mode does have a few implementation restrictions. It is not possible to use GRE IPSec transport mode if the crypto tunnel transits a device using Network Address Translation (NAT) or Port Address Translation (PAT). In such cases, Tunnel mode must be used.
 
Finally, if the GRE tunnel endpoints and Crypto tunnel endpoints are different, GRE IPSec transport mode cannot be used.
 
These limitations seriously restrict the use and implementation of the transport mode in a WAN network environment.
 
Calculating GRE IPSec Transport Mode Overhead
 
Calculating the overhead will help us understand how much space GRE over IPSec in Transport mode uses and our effective usable space.
 
 
The packet structure below shows an example of GRE over IPSec in transport mode:
 
 
 
 
 
Again, two important points must be kept in mind when calculating the overhead:
 
Depending on the encryption algorithm used in the crypto transform set, the Initialization Vector (IV) shown could be 8 or 16 bytes long. For example, DES or 3DES introduces an 8-byte IV field, whereas AES introduces a 16-byte IV field. In our example, we are using 3DES encryption, therefore producing an 8-byte IV field.
 
The ESP Trailer will usually vary in size. Its job is to ensure that the Pad Length, Next Header fields (both 1-byte long and contained within the ESP Trailer) & ESP Auth.Trailer are aligned on a 4-byte boundary. This means the total number of bytes, when adding the three fields together, must be a multiple of 4.
 
Following is the calculated overhead:
 
ESP Overhead: 20 (IP Hdr) + 8 (ESP Hdr) + 8 (IV) + 4 (ESP Trailer) + 12 (ESP Auth) = 52 Bytes
 
Note: ESP Trailer has been calculated as 4 bytes as per above note.
 
GRE Overhead: 4 (GRE) = 4 Bytes
 
Total Overhead: 52 + 4 = 56 Bytes
 
It is evident that GRE IPSec Transport mode saves approximately 20 bytes per packet overhead. This might save a moderate amount of bandwidth on a WAN link, however, there is no significant increase in CPU performance by using this mode.
 
Conclusion
 
When comparing GRE over IPSec tunnel and GRE over IPSec transport mode, there are significant differences that cannot be ignored.
 
If the GRE tunnel and crypto endpoints are not the same (IP address wise), transport mode is definitely not an option.
 
If packets traverse a device (router) where NAT or PAT is used then again, transport mode cannot be used.
 
On the other hand, tunnel mode seems to pay off its additional 20-byte overhead by being flexible enough to be used in any type of WAN environment and offering increased protection by encrypting the GRE IP Header inside the ESP packet.
 
Taking into consideration the small additional CPU load that tunnel mode produces and the advantages it offers, we don't believe it's a coincidence that Cisco has selected this mode in IPSec's default configuration.
 

Configuring Point-to-Point GRE VPN Tunnels - Unprotected GRE & Protected GRE over IPSec Tunnels

 
 
 
 
Generic Routing Encapsulation (GRE) is a tunneling protocol developed by Cisco that allows the encapsulation of a wide variety of network layer protocols inside point-to-point links.
 
A GRE tunnel is used when packets need to be sent from one network to another over the Internet or an insecure network. With GRE, a virtual tunnel is created between the two endpoints (Cisco routers) and packets are sent through the GRE tunnel.
 
It is important to note that packets travelling inside a GRE tunnel are not encrypted, as GRE does not encrypt the data but simply encapsulates it with a GRE header. If data protection is required, IPSec must be configured to provide data confidentiality – this is when a GRE tunnel is transformed into a secure VPN GRE tunnel.
 
The diagram below shows the encapsulation procedure of a simple - unprotected - GRE packet as it traverses the router and enters the tunnel interface:
 
 
 
While many might think a GRE IPSec tunnel between two routers is similar to a site to site IPSec VPN (crypto), it is not. A major difference is that GRE tunnels allow multicast packets to traverse the tunnel whereas IPSec VPN does not support multicast packets. In large networks where routing protocols such as OSPF, EIGRP are necessary, GRE tunnels are your best bet. For this reason, plus the fact that GRE tunnels are much easier to configure, engineers prefer to use GRE rather than IPSec VPN.
 
This article will explain how to create simple (unprotected) and secure (IPSec encrypted) GRE tunnels between endpoints. We explain all the necessary steps to create and verify the GRE tunnel (unprotected and protected) and configure routing between the two networks.
 
 
 
 
Creating a Cisco GRE Tunnel
 
A GRE tunnel uses a 'tunnel' interface – a logical interface configured on the router with an IP address, where packets are encapsulated and decapsulated as they enter or exit the GRE tunnel.
 
First step is to create our tunnel interface on R1:
 
R1(config)# interface Tunnel0

R1(config-if)# ip address 172.16.0.1 255.255.255.0

R1(config-if)# ip mtu 1400

R1(config-if)# ip tcp adjust-mss 1360

R1(config-if)# tunnel source 1.1.1.10

R1(config-if)# tunnel destination 2.2.2.10
 
All Tunnel interfaces of participating routers must always be configured with an IP address that is not used anywhere else in the network. Each Tunnel interface is assigned an IP address within the same network as the other Tunnel interfaces.
 
In our example, both Tunnel interfaces are part of the 172.16.0.0/24 network.
 
Since GRE is an encapsulating protocol, we adjust the maximum transfer unit (mtu) to 1400 bytes and maximum segment size (mss) to 1360 bytes. Because most transport MTUs are 1500 bytes and we have an added overhead because of GRE, we must reduce the MTU to account for the extra overhead. A setting of 1400 is a common practice and will ensure unnecessary packet fragmentation is kept to a minimum.
 
Finally, we define the tunnel source, which is R1's public IP address, and the tunnel destination, which is R2's public IP address.
 
As soon as we complete R1’s configuration, the router will confirm the creation of the tunnel and inform about its status:
 
R1#
*May 4 21:30:22.971: %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel0, changed state to up
 
Since the Tunnel 0 interface is a logical interface it will remain up even if there is no GRE tunnel configured or connected at the other end.
 
Next, we must create the Tunnel 0 interface on R2:
 
R2(config)# interface Tunnel0

R2(config-if)# ip address 172.16.0.2 255.255.255.0

R2(config-if)# ip mtu 1400

R2(config-if)# ip tcp adjust-mss 1360

R2(config-if)# tunnel source 2.2.2.10

R2(config-if)# tunnel destination 1.1.1.10
 
R2’s Tunnel interface is configured with the appropriate tunnel source and destination IP address. As with R1, R2 router will inform us that the Tunnel0 interface is up:
 
R2#
*May 4 21:32:54.927: %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel0, changed state to up

Routing Networks Through the GRE Tunnel
 
At this point, both tunnel endpoints are ready and can ‘see’ each other. An icmp echo from one end will confirm this:
 
R1# ping 172.16.0.2
 
Type escape sequence to abort.
 
Sending 5, 100-byte ICMP Echos to 172.16.0.2, timeout is 2 seconds:
 
!!!!!
 
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
 
R1#
 
Again, this result means that the two tunnel endpoints can see each other. Workstations on either network will still not be able to reach the other side unless a static route is placed on each endpoint:
 
R1(config)# ip route 192.168.2.0 255.255.255.0 172.16.0.2
 
On R1 we add a static route to the remote network 192.168.2.0/24 via 172.16.0.2 which is the other end of our GRE Tunnel. When R1 receives a packet for 192.168.2.0 network, it now knows the next hop is 172.16.0.2 and therefore will send it through the tunnel.
 
The same configuration must be repeated for R2:
 
R2(config)# ip route 192.168.1.0 255.255.255.0 172.16.0.1
 
Now both networks are able to freely communicate with each other over the GRE Tunnel.
 
 
Securing the GRE Tunnel with IPSec
 
As mentioned earlier, GRE is an encapsulation protocol and does not perform any encryption. Creating a point-to-point GRE tunnel without any encryption is extremely risky as sensitive data can easily be extracted from the tunnel and viewed by others.
 
For this purpose, we use IPSec to add an encryption layer and secure the GRE tunnel. This provides us with the necessary military-grade encryption and peace of mind. Our example below covers GRE IPSec Tunnel mode.
 
Configuring IPSec Encryption for GRE Tunnel (GRE over IPSec)
 
IPSec encryption involves two steps for each router. These steps are:
 
(1) Configure ISAKMP (ISAKMP Phase 1)
 
(2) Configure IPSec (ISAKMP Phase 2)

Configure ISAKMP (IKE) - (ISAKMP Phase 1)
 
IKE exists only to establish SAs (Security Association) for IPsec. Before it can do this, IKE must negotiate an SA (an ISAKMP SA) relationship with the peer.
 
To begin, we’ll start working on R1.
 
First step is to configure an ISAKMP Phase 1 policy:
 
R1(config)# crypto isakmp policy 1
 
R1(config-isakmp)# encr 3des
 
R1(config-isakmp)# hash md5
 
R1(config-isakmp)# authentication pre-share
 
R1(config-isakmp)# group 2
 
R1(config-isakmp)# lifetime 86400
 
The above commands define the following (in listed order):
 
3DES - The encryption method to be used for Phase 1.
 
MD5 - The hashing algorithm
 
Pre-share - Use Pre-shared key as the authentication method
 
Group 2 - Diffie-Hellman group to be used
 
86400 – Session key lifetime. Expressed in either kilobytes (after x-amount of traffic, change the key) or seconds. Value set is the default value.
 
Next we are going to define a pre-shared key for authentication with R1's peer, 2.2.2.10:
 
R1(config)# crypto isakmp key firewallcx address 2.2.2.10
 
The peer’s pre-shared key is set to firewallcx. This key will be used for all ISAKMP negotiations with peer 2.2.2.10 (R2).
Create IPSec Transform (ISAKMP Phase 2 policy)
 
Now we need to create the transform set used to protect our data. We’ve named this TS:
 
R1(config)# crypto ipsec transform-set TS esp-3des esp-md5-hmac
R1(cfg-crypto-trans)# mode transport
 
The above commands define the following:
 
- ESP-3DES - Encryption method
 
- MD5 - Hashing algorithm
 
- Set IPSec to transport mode
 
Finally, we create an IPSec profile to connect the previously defined ISAKMP and IPSec configuration together. We’ve named our IPSec profile protect-gre:
 
  • R1(config)# crypto ipsec profile protect-gre
  • R1(ipsec-profile)# set security-association lifetime seconds 86400
  • R1(ipsec-profile)# set transform-set TS
 
We are ready to apply the IPSec encryption to the Tunnel interface:
 
  • R1(config)# interface Tunnel 0
  • R1(config-if)# tunnel protection ipsec profile protect-gre
 
Now it's time to apply the same configuration on R2:
 
  • R2(config)# crypto isakmp policy 1
  • R2(config-isakmp)# encr 3des
  • R2(config-isakmp)# hash md5
  • R2(config-isakmp)# authentication pre-share
  • R2(config-isakmp)# group 2
  • R2(config-isakmp)# lifetime 86400
  • R2(config)# crypto isakmp key firewallcx address 1.1.1.10
  • R2(config)# crypto ipsec transform-set TS esp-3des esp-md5-hmac
  • R2(cfg-crypto-trans)# mode transport
  • R2(config)# crypto ipsec profile protect-gre
  • R2(ipsec-profile)# set security-association lifetime seconds 86400
  • R2(ipsec-profile)# set transform-set TS
  • R2(config)# interface Tunnel 0
  • R2(config-if)# tunnel protection ipsec profile protect-gre
 
Verifying the GRE over IPSec Tunnel
 
Finally, our tunnel has been encrypted with IPSec, providing us with the much needed security layer. To test and verify this, all that is required is to ping the other end and force the VPN IPSec tunnel to come up and start encrypting/decrypting our data:
 
R1# ping 192.168.2.1
 
Type escape sequence to abort.
 
Sending 5, 100-byte ICMP Echos to 192.168.2.1, timeout is 2 seconds:
 
!!!!!
 
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/3/4 ms
 
Using the show crypto session command, we can quickly verify the encryption is in place and doing its work:
 
R1# show crypto session
 
Crypto session current status
 
Interface: Tunnel0
 
Session status: UP-ACTIVE
 
Peer: 2.2.2.10 port 500
 
IKE SA: local 1.1.1.10/500 remote 2.2.2.10/500 Active
 
IPSEC FLOW: permit 47 host 1.1.1.10 host 2.2.2.10
 
Active SAs: 2, origin: crypto map
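Another useful check is show crypto ipsec sa, where the encapsulation and decapsulation counters should increase as traffic passes through the protected tunnel (heavily abridged, indicative output; identifiers and counters will differ on your router):

R1# show crypto ipsec sa

interface: Tunnel0
    Crypto map tag: Tunnel0-head-0, local addr 1.1.1.10

   local  ident (addr/mask/prot/port): (1.1.1.10/255.255.255.255/47/0)
   remote ident (addr/mask/prot/port): (2.2.2.10/255.255.255.255/47/0)
   current_peer 2.2.2.10 port 500
    #pkts encaps: 9, #pkts encrypt: 9, #pkts digest: 9
    #pkts decaps: 9, #pkts decrypt: 9, #pkts verify: 9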
 
Configuring Site to Site IPSec VPN Tunnel Between Cisco Routers
 
 
 
Site-to-Site IPSec VPN Tunnels are used to allow the secure transmission of data, voice and video between two sites (e.g. offices or branches). The VPN tunnel is created over the public Internet and encrypted using a number of advanced encryption algorithms to provide confidentiality for the data transmitted between the two sites.
 
This article will show how to setup and configure two Cisco routers to create a permanent secure site-to-site VPN tunnel over the Internet, using the IP Security (IPSec) protocol.
 
ISAKMP (Internet Security Association and Key Management Protocol) and IPSec are essential to building and encrypting the VPN tunnel. ISAKMP, also called IKE (Internet Key Exchange), is the negotiation protocol that allows two hosts to agree on how to build an IPsec security association. ISAKMP negotiation consists of two phases: Phase 1 and Phase 2.
 
Phase 1 creates the first tunnel, which protects later ISAKMP negotiation messages. Phase 2 creates the tunnel that protects data. IPSec then comes into play to encrypt the data using encryption algorithms and provides authentication, encryption and anti-replay services.
IPSec VPN Requirements
 
To help make this an easy-to-follow exercise, we have split it into two steps that are required to get the Site-to-Site IPSec VPN Tunnel to work.
 
These steps are:
 
(1) Configure ISAKMP (ISAKMP Phase 1)
 
(2) Configure IPSec (ISAKMP Phase 2, ACLs, Crypto MAP)
 
Our example setup is between two branches of a small company: Site 1 and Site 2. Both branch routers connect to the Internet and have a static IP address assigned by their ISP, as shown on the diagram:
 
 
 
Site 1 is configured with an internal network of 10.10.10.0/24, while Site 2 is configured with network 20.20.20.0/24. The goal is to securely connect both LAN networks and allow full communication between them, without any restrictions.
Configure ISAKMP (IKE) - (ISAKMP Phase 1)
 
IKE exists only to establish SAs (Security Association) for IPsec. Before it can do this, IKE must negotiate an SA (an ISAKMP SA) relationship with the peer.
 
To begin, we’ll start working on the Site 1 router (R1).
 
First step is to configure an ISAKMP Phase 1 policy:
 
  • R1(config)# crypto isakmp policy 1
  • R1(config-isakmp)# encr 3des
  • R1(config-isakmp)# hash md5
  • R1(config-isakmp)# authentication pre-share
  • R1(config-isakmp)# group 2
  • R1(config-isakmp)# lifetime 86400
 
 
The above commands define the following (in listed order):
 
3DES - The encryption method to be used for Phase 1.
 
MD5 - The hashing algorithm
 
Pre-share - Use Pre-shared key as the authentication method
 
Group 2 - Diffie-Hellman group to be used
 
86400 – Session key lifetime, expressed in seconds (it can alternatively be limited by kilobytes of traffic, after which the key is renegotiated). 86400 seconds is the default value.
 
We should note that ISAKMP Phase 1 policy is defined globally. This means that if we have five different remote sites and configured five different ISAKMP Phase 1 policies (one for each remote router), when our router tries to negotiate a VPN tunnel with each site it will send all five policies and use the first match that is accepted by both ends.
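As a purely illustrative example (not part of this lab), a policy for an additional remote site could be added under a different priority number; during negotiation the router offers its policies starting from the lowest priority number until the peer accepts one:

R1(config)# crypto isakmp policy 5
R1(config-isakmp)# encr aes 256
R1(config-isakmp)# hash sha
R1(config-isakmp)# authentication pre-share
R1(config-isakmp)# group 5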
 
Next we are going to define a pre-shared key for authentication with our peer (the R2 router) by using the following command:
 
  • R1(config)# crypto isakmp key firewallcx address 1.1.1.2
 
The peer’s pre-shared key is set to firewallcx and its public IP address is 1.1.1.2. Every time R1 tries to establish a VPN tunnel with R2 (1.1.1.2), this pre-shared key will be used.
Configure IPSec
 
To configure IPSec we need to setup the following in order:
 
- Create extended ACL
 
- Create IPSec Transform
 
- Create Crypto Map
 
- Apply crypto map to the public interface
 
Let us examine each of the above steps.
Creating Extended ACL
 
Next step is to create an access-list and define the traffic we would like the router to pass through the VPN tunnel. In this example, it is traffic from one network to the other, 10.10.10.0/24 to 20.20.20.0/24. Access-lists that define VPN traffic are sometimes called crypto access-lists or interesting traffic access-lists.
 
R1(config)# ip access-list extended VPN-TRAFFIC
R1(config-ext-nacl)# permit ip 10.10.10.0 0.0.0.255 20.20.20.0 0.0.0.255
Create IPSec Transform (ISAKMP Phase 2 policy)
 
Next step is to create the transform set used to protect our data. We’ve named this TS:
 
R1(config)# crypto ipsec transform-set TS esp-3des esp-md5-hmac
 
The above command defines the following:
 
- ESP-3DES - Encryption method
 
- MD5 - Hashing algorithm
Create Crypto Map
 
The Crypto map is the last step of our setup and connects the previously defined ISAKMP and IPSec configuration together:
 
  • R1(config)# crypto map CMAP 10 ipsec-isakmp
  • R1(config-crypto-map)# set peer 1.1.1.2
  • R1(config-crypto-map)# set transform-set TS
  • R1(config-crypto-map)# match address VPN-TRAFFIC
 
We’ve named our crypto map CMAP. The ipsec-isakmp tag tells the router that this crypto map is an IPsec crypto map. Although there is only one peer declared in this crypto map (1.1.1.2), it is possible to have multiple peers within a given crypto map.
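As an illustration only (1.1.1.6 is a hypothetical backup peer, not part of this lab), an additional peer can be declared in the same crypto map entry with a second set peer statement; the first peer configured is tried first:

R1(config)# crypto map CMAP 10 ipsec-isakmp
R1(config-crypto-map)# set peer 1.1.1.2
R1(config-crypto-map)# set peer 1.1.1.6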
 
 
Apply Crypto Map to the Public Interface
 
The final step is to apply the crypto map to the outgoing interface of the router. Here, the outgoing interface is FastEthernet 0/1.
 
  • R1(config)# interface FastEthernet0/1
  • R1(config-if)# crypto map CMAP
 
Note that you can assign only one crypto map to an interface.
 
As soon as we apply the crypto map to the interface, we receive a message from the router confirming that ISAKMP is on: “ISAKMP is ON”.
 
At this point, we have completed the IPSec VPN configuration on the Site 1 router.
 
We now move to the Site 2 router to complete the VPN configuration. The settings for Router 2 are identical, with the only difference being the peer IP Addresses and access lists:
 
  • R2(config)# crypto isakmp policy 1
  • R2(config-isakmp)# encr 3des
  • R2(config-isakmp)# hash md5
  • R2(config-isakmp)# authentication pre-share
  • R2(config-isakmp)# group 2
  • R2(config-isakmp)# lifetime 86400
  • R2(config)# crypto isakmp key firewallcx address 1.1.1.1
  • R2(config)# ip access-list extended VPN-TRAFFIC
  • R2(config-ext-nacl)# permit ip 20.20.20.0 0.0.0.255 10.10.10.0 0.0.0.255
  • R2(config)# crypto ipsec transform-set TS esp-3des esp-md5-hmac
  • R2(config)# crypto map CMAP 10 ipsec-isakmp
  • R2(config-crypto-map)# set peer 1.1.1.1
  • R2(config-crypto-map)# set transform-set TS
  • R2(config-crypto-map)# match address VPN-TRAFFIC
  • R2(config)# interface FastEthernet0/1
  • R2(config-if)# crypto map CMAP
 
Bringing Up and Verifying the VPN Tunnel
 
At this point, we’ve completed our configuration and the VPN Tunnel is ready to be brought up. To initiate the VPN Tunnel, we need to force one packet to traverse the VPN and this can be achieved by pinging from one router to another:
 
R1# ping 20.20.20.1 source fastethernet0/0
 
Type escape sequence to abort.
 
Sending 5, 100-byte ICMP Echos to 20.20.20.1, timeout is 2 seconds:
 
Packet sent with a source address of 10.10.10.1
 
.!!!!
 
Success rate is 80 percent (4/5), round-trip min/avg/max = 44/47/48 ms
 
 
 
 
 
The first ping timed out, but the rest received a reply, as expected. The time required to bring up the VPN tunnel is sometimes slightly more than 2 seconds, causing the first ping to time out.
 
To verify the VPN Tunnel, use the show crypto session command:
 
R1# show crypto session
 
Crypto session current status
 
Interface: FastEthernet0/1
 
Session status: UP-ACTIVE
 
Peer: 1.1.1.2 port 500
 
IKE SA: local 1.1.1.1/500 remote 1.1.1.2/500 Active
 
IPSEC FLOW: permit ip 10.10.10.0/255.255.255.0 20.20.20.0/255.255.255.0
 
Active SAs: 2, origin: crypto map
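A further check is show crypto isakmp sa, which should list the Phase 1 security association in the QM_IDLE state once the tunnel has been established (abridged, indicative output):

R1# show crypto isakmp sa

IPv4 Crypto ISAKMP SA
dst             src             state          conn-id status
1.1.1.2         1.1.1.1         QM_IDLE           1001 ACTIVE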
 
 
Network Address Translation (NAT) and IPSec VPN Tunnels
 
Network Address Translation (NAT) is probably configured to provide Internet access to internal hosts. When configuring a Site-to-Site VPN tunnel, it is imperative to instruct the router not to perform NAT (deny NAT) on packets destined to the remote VPN network.
 
This is easily done by inserting a deny statement at the beginning of the NAT access lists as shown below:
 
For Site 1’s router:
 
  • R1(config)# ip nat inside source list 100 interface fastethernet0/1 overload
  • R1(config)# access-list 100 remark -=[Define NAT Service]=-
  • R1(config)# access-list 100 deny ip 10.10.10.0 0.0.0.255 20.20.20.0 0.0.0.255
  • R1(config)# access-list 100 permit ip 10.10.10.0 0.0.0.255 any
  • R1(config)# access-list 100 remark
 
 
 
And Site 2’s router:
 
  • R2(config)# ip nat inside source list 100 interface fastethernet0/1 overload
  • R2(config)# access-list 100 remark -=[Define NAT Service]=-
  • R2(config)# access-list 100 deny ip 20.20.20.0 0.0.0.255 10.10.10.0 0.0.0.255
  • R2(config)# access-list 100 permit ip 20.20.20.0 0.0.0.255 any
  • R2(config)# access-list 100 remark
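To confirm that VPN-bound traffic really is bypassing NAT, you can watch the hit counters on the deny entry of the NAT access list (indicative output; the match counters will obviously differ on your router):

R1# show access-lists 100

Extended IP access list 100
    10 deny ip 10.10.10.0 0.0.0.255 20.20.20.0 0.0.0.255 (25 matches)
    20 permit ip 10.10.10.0 0.0.0.255 any (134 matches)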
Configuring Static Route Tracking using IP SLA (Basic)
 
In today's network environment, redundancy is one of the most important aspects, whether it's on the LAN side or on the WAN side. In this topic I will be covering WAN redundancy with multiple WAN links terminating on a single router.
 
The best and simplest way to achieve WAN redundancy on Cisco devices is to use Reliable Static backup routes with IP SLA tracking.
 
IP SLA is a feature included in Cisco IOS Software that gives administrators the ability to analyze IP service levels for IP applications and services. IP SLA uses active traffic monitoring to generate continuous, predictable traffic across the network, making it a reliable method of measuring overall network performance. Cisco routers can also act as IP SLA responders, which improves the accuracy of the measured data across a network.
 
With IP SLA, routers and switches perform periodic measurements. The number and types of available measurements are vast; in this article I will be covering just the ICMP echo feature, as IP SLA in itself is a very big topic to cover.
 
Let us take an example of a basic redundant WAN link scenario as shown below:
 
 
 
In the above figure the Cisco device is connected to two WAN links, ISP1 and ISP2. The most common setup used in day-to-day life is to have two default routes configured on the Cisco router pointing to the respective next-hop IPs, as shown below:
 
  • R1(config)# ip route 0.0.0.0 0.0.0.0 2.2.2.2
  • R1(config)# ip route 0.0.0.0 0.0.0.0 3.3.3.3 10
 
Notice that the Administrative Distance for the secondary route pointing to ISP2 is increased to 10, so that it becomes the backup link.
 
The above configuration with just two floating static routes only partially accomplishes our requirement, as it works only when the router's interfaces connected to the WAN links go into an up/down or down/down state. In many cases, however, the link stays up yet the gateway is unreachable; this usually happens when the problem is on the ISP's side.
 
In such scenarios, IP SLAs becomes an engineer's best friend. With around six additional IOS commands we can have a more reliable automatic failover environment.
 
Using IP SLA the Cisco IOS gets the ability to use Internet Control Message Protocol (ICMP) pings to identify when a WAN link goes down at the remote end and hence allows the initiation of a backup connection from an alternative port. The Reliable Static Routing Backup using Object Tracking feature can ensure reliable backup in the case of several catastrophic events, such as Internet circuit failure or peer device failure.
 
IP SLA is configured to ping a target, such as a publicly routable IP address, a target inside the corporate network, or your next-hop IP on the ISP's router. The pings are routed from the primary interface only. Below is a sample configuration of IP SLA generating an ICMP ping targeted at ISP1's next-hop IP:
  • R1(config)# ip sla 1
  • R1(config-ip-sla)# icmp-echo 2.2.2.2 source-interface FastEthernet0/0
  • R1(config-ip-sla-echo)# timeout 1000
  • R1(config-ip-sla-echo)# threshold 2
  • R1(config-ip-sla-echo)# frequency 3
  • R1(config-ip-sla-echo)# exit
  • R1(config)# ip sla schedule 1 life forever start-time now
 
Please note that the Cisco IP SLA commands have changed between IOS versions; to find the exact commands for your IOS release, check the Cisco documentation. The above commands apply to IOS 12.4(4)T, 15.0(1)M and later releases.
 
The above configuration defines and starts an IP SLA probe.
 
The ICMP Echo probe sends an ICMP Echo packet to next-hop IP 2.2.2.2 every 3 seconds, as defined by the “frequency” parameter.
 
Timeout sets the amount of time (in milliseconds) for which the Cisco IOS IP SLAs operation waits for a response from its request packet.
 
Threshold sets the rising threshold that generates a reaction event and stores history information for the Cisco IOS IP SLAs operation.
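Before tying the probe to a route, it is worth confirming it is actually running; show ip sla statistics should report an OK return code when the target is reachable (abridged, indicative output; field names vary slightly between IOS releases):

R1# show ip sla statistics 1

IPSLA operation id: 1
        Latest RTT: 1 milliseconds
Latest operation return code: OK
Number of successes: 25
Number of failures: 0
Operation time to live: Forever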
 
After defining the IP SLA operation our next step is to define an object that tracks the SLA probe. This can be accomplished by using the IOS Track Object as shown below:
 
R1(config)# track 1 ip sla 1 reachability
 
The above command will track the state of the IP SLA operation. If there are no ping responses from the next-hop IP, the track state will go down; it will come back up once the IP SLA operation starts receiving ping responses again.
 
To verify the track status, use the “show track” command as shown below:
 
  • R1# show track
  •  
  • Track 1
  • IP SLA 1 reachability
  • Reachability is Down
  • 1 change, last change 00:03:19
  • Latest operation return code: Unknown
 
The above output shows that the track status is down. Every IP SLAs operation maintains an operation return-code value. This return code is interpreted by the tracking process. The return code may return OK, OverThreshold, and several other return codes.
 
Different operations may have different return-code values, so only values common to all operation types are used. The below table shows the track states as per the IP SLA return code.
 
 
 
Tracking          Return Code                   Track State
Reachability      OK or over threshold          Up
                  (all other return codes)      Down
 
The last step in the IP SLA Reliable Static Route configuration is to add the “track” statement to the primary default route pointing to the ISP1 router, as shown below:
 
  • R1(config)# ip route 0.0.0.0 0.0.0.0 2.2.2.2 track 1
  • R1(config)# ip route 0.0.0.0 0.0.0.0 3.3.3.3 10
 
The track number keyword and argument combination specifies that the static route will be installed only if the state of the configured track object is up. Hence, if the track state is down, the secondary route will be used to forward all traffic.
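A simple way to confirm the behaviour is to check which default route is currently installed. While the track is up, only the primary route appears; once the track goes down, the backup route ([10/0] via 3.3.3.3) is installed in its place (abridged, indicative output):

R1# show ip route static

S*   0.0.0.0/0 [1/0] via 2.2.2.2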
 
 
How to Configure DHCP Server on a Cisco Router
 
 
Introduction
 
DHCP (Dynamic Host Configuration Protocol) is the protocol used by network devices (such as PCs, network printers, etc) to automatically obtain correct network parameters so they can access network and Internet resources such as IP Address, Default Gateway, Domain Name, DNS Servers and more.
 
A DHCP server is considered necessary in today's networks. Devices usually found providing this service are Windows servers, routers and layer 3 switches.
 
This article describes how to configure basic DHCP parameters on a Cisco router, enabling it to act as a DHCP server for your network.
Example Scenario
 
For the sake of this article, suppose we have the network shown in the following diagram, for which we would like to enable the DHCP service on our Cisco router.
 
The router will act as a DHCP server for the 192.168.1.0/24 network. IP Addresses already assigned to our switch (192.168.1.2) and File Server (192.168.1.5) will be excluded from the DHCP pool, to ensure they are not given out to other hosts and cause an IP address conflict.
 
 
 
First step is to enable the DHCP service on our router, which by default is enabled:
 
R1# configure terminal
R1(config)# service dhcp
 
Next step is to create the DHCP pool that defines the network of IP addresses that will be given out to the clients. Note that 'NET-POOL' is the name of the DHCP IP Pool we are creating:
 
  • R1(config)# ip dhcp pool NET-POOL
  • R1(dhcp-config)# network 192.168.1.0 255.255.255.0
 
This tells the router to issue IP addresses for the 192.168.1.0 network, which translates to the range 192.168.1.1 - 192.168.1.254. We will exclude the addresses we don't want handed out later on.
 
We now define the DHCP parameters that will be given to each client. These include the default gateway (default-router), DNS servers, domain name and lease period (in days):
 
  • R1(dhcp-config)# default-router 192.168.1.1
  • R1(dhcp-config)# dns-server 192.168.1.5 195.170.0.1
  • R1(dhcp-config)# domain-name Firewall.cx
  • R1(dhcp-config)# lease 9
 
The 'domain-name' and 'lease' parameters are not essential and can be left out. By default, the lease time for an IP address is one day.
 
All we need now is to exclude the IP addresses we don't want our DHCP server giving out. Drop back to 'global configuration mode' and enter the following:
 
  • R1(config)# ip dhcp excluded-address 192.168.1.1 192.168.1.5
  • R1(config)# ip dhcp excluded-address 192.168.1.10
 
This excludes IP addresses 192.168.1.1 - 192.168.1.5 & 192.168.1.10. As you can see, there's an option to exclude a range of IP addresses or a specific address.
 
The above configuration is all you need to get the DHCP server running for your network. We'll provide a few more commands you can use to troubleshoot and ensure it's working correctly.
 
The following command will allow you to check which clients have been served by the DHCP server:
 
  • R1# show ip dhcp binding
  • Bindings from all pools not associated with VRF:
  • IP address Client-ID/ Lease expiration Type
  • Hardware address/
  • User name
  • 192.168.1.6 0100.1e7a.c409 Jan 19 2009 03:06 PM Automatic
  • 192.168.1.7 0100.1e7a.c3c1 Jan 19 2009 09:00 PM Automatic
  • 192.168.1.8 0100.1ebe.923b Jan 19 2009 02:25 PM Automatic
  • 192.168.1.9 0100.1b53.5ccc Jan 19 2009 02:03 PM Automatic
  • 192.168.1.11 0100.1e7a.261d Jan 19 2009 07:52 PM Automatic
  • R1#
 
Notice that IP addresses 192.168.1.5 & 192.168.1.10 have not been assigned to the clients.
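Two more standard IOS commands can help when troubleshooting (abridged, indicative output): show ip dhcp pool summarizes pool usage, and show ip dhcp conflict lists any addresses the server has detected as already in use on the network:

R1# show ip dhcp pool NET-POOL

Pool NET-POOL :
 Utilization mark (high/low)    : 100 / 0
 Total addresses                : 254
 Leased addresses               : 5

R1# show ip dhcp conflict

IP address        Detection method   Detection time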
Article Summary
 
In this article we've covered how a Cisco router can be used as a basic DHCP server and the various options available. We also saw how you can obtain general information about the service. There are more options available with the DHCP service; however, this basic article should cover most of your network needs.
 
Future DHCP articles will explore advanced options and debugging for more complex networks containing VLANs and IP Telephony.
 