Wednesday, December 21, 2011

Configuring FAST Search Server 2010 To Use SSL With A CA Certificate

I had the opportunity to configure a FAST Search Server 2010 deployment in a secure environment. The instructions for configuring SSL for FAST are fairly straightforward; however, there were a few gotchas involved.

First, install FAST just as you normally would. Follow these instructions from Microsoft: Configure a stand-alone deployment or a multiple server deployment (FAST Search Server 2010 for SharePoint)
You want to use the FASTSearchCert.pfx self-signed certificate that is generated by FAST when you run SecureFASTSearchConnector.ps1 for the first time. Be sure that the user you pass in the -username switch is the SAME user that is running the SharePoint Search Service you configured when you created the Content SSA. This user also needs to be a member of the FASTSearchAdministrators local group on the FAST admin and non-admin servers. This is very important!
Also, use the normal http settings for the Administrative Services and the Query Services for the time being. We want to make sure that our connections to FAST work without incident BEFORE we complicate things by introducing the CA cert.
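For reference, here is a hedged sketch of the connector command. The paths, SSA name, and account below are placeholders for your own values; check the script's parameters in your install before running it.

```powershell
# Run from a SharePoint 2010 Management Shell on the SharePoint server,
# after copying the script and FASTSearchCert.pfx over from the FAST server.
# All paths and names below are examples -- substitute your own.
cd "C:\FASTSearch\installer\scripts"
.\SecureFASTSearchConnector.ps1 `
    -certPath "C:\FASTSearch\data\data_security\cert\FASTSearchCert.pfx" `
    -ssaName  "FAST Content SSA" `
    -username "MYDOMAIN\svcSPSearch"   # MUST be the SharePoint Search Service account
```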

After you set up your environment, and you have confirmed that everything is working properly (no errors in the SharePoint 2010 event logs), it is time to complicate the deployment by adding the CA-signed certificates. The instructions for setting up SSL are a bit vague in places, so I will set down what I did to make everything work.

First things first... Check out the following site: Manage certificates (FAST Search Server 2010 for SharePoint)
As you can see, you will need to obtain certificates, all signed by the same Certificate Authority, for every server involved. You can really complicate your installation by changing the DNS alias that is hosting your FAST Search Administration and Resource Store IIS web sites. If you choose to do so (I personally don't recommend it), a specific SSL cert needs to be created, signed by the same CA that signs the rest of your server certificates, and then bound to those web sites.

Be sure to complete the section on Replacing the Query HTTPS certificate. It is very important that you have the port correctly configured to use the proper certificate.

One gotcha that I ran into was that you do need to update the deployment.xml to reflect your usage of SSL for the Administrative services. Be sure to follow the instructions on the following page: Enable Administration Service over HTTPS (FAST Search Server 2010 for SharePoint). Step number four is the one that points you to change the config.xml file.
I like to do one last check here and attempt to access the secure URLs for my services. You should get 403 errors saying that directory browsing is not permitted. What you should not get is a page saying there is a problem with the certificate. If you do get that page, go back to the instructions and make sure that you have secured everything correctly and that the proper certificates are installed.

Now that everything on the FAST server is taken care of, go back to your SharePoint 2010 Central Administration server. The server certificates for this server should be installed, as well as the root certificate for the CA. The certificate used for the SharePoint server needs to have been created specifically for that server; you can't just export the server certificate from the FAST server and install it on the SharePoint server. It must be issued specifically for the SharePoint server by the same CA as the FAST certificates.
Get into Central Administration and find your Query Search Service Application properties. Update the Query, Administrative, and Resource Store Service Locations with their secure locations. This is fairly straightforward. To check whether things are working correctly, simply click on the Query Search Service Application, then click FAST Search Administration on the left. Go through each link on the FAST Administration page and confirm that there are no errors.

Next, open up a SharePoint PowerShell command window and execute the following cmdlet: Ping-SPEnterpriseSearchContentService -hostName [FASTContentDistributor:PORT], where FASTContentDistributor is the proper location for your Content Distributor. Don't forget the port number; it is 13391 if you used the default ports.
This handy-dandy cmdlet will give you a listing of the installed Personal certificates on the server and will confirm which one will successfully connect to your newly secured Content Distributor. Copy the thumbprint of the cert that connected and rerun the SecureFASTSearchConnector.ps1 script that you ran before to set up your Content SSA. This time, instead of pointing to a specific certificate file, you will be using the thumbprint of your installed certificate, as shown in the instructions from Microsoft.
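A hedged sketch of the two steps (host name, SSA name, and account are placeholders; the thumbprint switch name is from my recollection of the SP1 version of the script, so verify it against the script's own parameter list):

```powershell
# List installed certs and see which one can reach the Content Distributor
Ping-SPEnterpriseSearchContentService -HostName "fastadmin.myclient.com:13391"

# Re-run the connector script, this time by thumbprint instead of a .pfx file
.\SecureFASTSearchConnector.ps1 `
    -certThumbprint "<thumbprint copied from the output above>" `
    -ssaName  "FAST Content SSA" `
    -username "MYDOMAIN\svcSPSearch"
```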

If all goes well you will get the magic words: "Connection to contentdistributor [your content distributor site:PORT] successfully validated."
After that, your communications between SharePoint and FAST Search Server will be conducted over SSL. Of course, if you add Content Distributors or Query Service locations, you WILL need to run through the steps of installing certificates and securing those sites just as we did above.

Sunday, December 11, 2011

Google Chrome, User Stylesheets, and Facebook People to Subscribe To Sidebar

I am an avid Facebook user. I really like it. To the point of near addiction... I need help, I really do.
Anyway, recently Facebook has allowed you to "subscribe" to people. What that means is that you can see what they are posting on their walls, but they won't see what you are posting on your wall. Nice for following athletes and other famous people. However, Facebook insists on putting advertisements up for people you should subscribe to. For some reason it kept wanting me to subscribe to various underwear, bikini, and Playboy models. I don't have any of those people as my friends, and I don't visit their web sites, so it isn't a cookie thing. Anyway, I wanted to get rid of it, because it was annoying.

Facebook will not allow you to remove this part of their page from your personal site, so how do you get rid of something like this??? The answer is personal browser stylesheets, or User Stylesheets.

So... What are User Stylesheets? Well, stylesheets are used on web pages to give the pages a uniform look and feel. One example: you want all text in a table to be boldface. You set up your stylesheet to boldface the text in tables, then apply that stylesheet to all of your web pages. BOOM, all of the text in tables is boldface. Just that easy.
What does that have to do with what we are talking about? You can't change Facebook's stylesheet, so big whoop. Well, modern browsers use what are known as user stylesheets: a stylesheet installed on the user's computer that is applied to all web pages browsed from that user profile. So, say you want all of your text to be red, regardless of what the web page specifies. You can do that. You can also run specific JavaScript on pages (via user scripts) to remove any bad words, or whatever.

I use Chrome for my primary browser, so first I needed to configure Chrome to use user stylesheets. That is easily done. All you do is change the Target property on the Chrome shortcut. You add "--enable-user-stylesheet" behind the chrome.exe in the text box. That's it! Chrome is now configured to use User Stylesheets.
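The modified Target field ends up looking something like this (your chrome.exe path may differ depending on where Chrome is installed):

```
"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --enable-user-stylesheet
```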

I want to remove the People To Subscribe To section, so I have to figure out what that particular section's ID is. Chrome really helps developers out by including developer tools. Just go to Settings, then Tools, then Developer Tools. I went through the code until I found the element I was looking for. It turns out that the element's ID is pagelet_ego_pane_w.
Thus armed, I now need to write the line of code that will hide it forever more... Or until Facebook changes the element's ID... Anyway, the code is simple, just this: #pagelet_ego_pane_w { display: none }

The element found, and the code to hide it written, I need to add it to my user stylesheet. I open Windows Explorer and find my user's AppData folder (this folder is hidden, so you will need to set Explorer to show all hidden folders). Once there, I can go into the Chrome user stylesheet folder (c:\Users\YOURUSERNAME\AppData\Local\Google\Chrome\User Data\Default\User StyleSheets\). I opened the custom.css file, put my code in at the top, saved the file, and restarted Chrome. Tada!!! No more annoying Subscribe To sidebar!! Hooray!!
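For reference, the whole change amounts to one rule at the top of custom.css:

```css
/* custom.css, located under
   c:\Users\YOURUSERNAME\AppData\Local\Google\Chrome\User Data\Default\User StyleSheets\ */

/* Hide Facebook's "People To Subscribe To" sidebar */
#pagelet_ego_pane_w { display: none }
```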

Friday, December 2, 2011

Authentication Types and Authentication Providers - SharePoint and IIS

I have been having an interesting discussion with my client that has nearly caused my head to explode. The discussion centers around Negotiate, Kerberos, NTLM, IIS, and SharePoint.

A little background first. The client wanted to set up a SharePoint site that could be accessed both by Kerberos and NTLM. SharePoint 2010 only allows a single Windows Authentication method per zone, so you need to set up two zones for a single web application: one configured for Kerberos, and one configured for NTLM. Pretty straightforward stuff.
I created a Web Application diagram detailing out the need for these two zones, and was called out by my client. He remarked that if you set your Web Application up for "Negotiate" you can use both Kerberos and NTLM, so the extra zone is not needed.
Wait... What?? Negotiate uses both NTLM and Kerberos?? Eh?? My world had just turned upside down. Kerberos and NTLM are mutually exclusive authentication methods and cannot be mixed together. You are using NTLM, or you are using Kerberos; there is nothing in between. This is evident in Central Administration by having to select either NTLM or Kerberos as your Windows Authentication.

So what are NTLM and Kerberos? Why don't they work together? Well, Kerberos is a ticket-based authentication method that requires all parties involved to be registered with Active Directory and trusted to use Kerberos. I like to call this connection to AD the Kerberos chain. Why? Because everything is registered and trusted in AD, special things can happen. Once a user is authenticated, servers and applications can use that user's credential for many authentications. Thus Kerberos can make multiple "hops" without having to re-authenticate the user. It makes a "chain" of authentication.
It works like this... The user attempts to connect to a web site that uses a database back end. The web site is secured with Kerberos. The user is first prompted for their credentials and authenticated by AD. The user is issued a ticket by the authenticating Domain Controller, which runs the Key Distribution Center (KDC) service.
The web site then makes a call to the back end database. The database asks who wants the data, and the web server responds with the user's token. The database says well, the user is trusted by the KDC, the Web server is trusted by the KDC, the web site is trusted by the KDC, I'm trusted by the KDC, and since I trust the KDC, I will send the user the data requested. Everything is chained together by AD, thus the Kerberos chain.
As you can imagine, Kerberos takes some configuration... This can be the tricky part, because all pieces of the puzzle need to be included in AD. That means that the SQL instances and DNS aliases used in IIS need to be registered with Service Principal Names (SPNs). All servers that host the SQL and IIS instances need to be trusted for delegation in AD, and the service accounts involved need their SPNs as well. Computer accounts get SPNs automatically when they join the domain, but service account SPNs must be registered manually, and servers are not trusted for delegation by default.
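A hedged sketch of the SPN registrations involved, using the setspn tool; all the account and host names here are made up, and you would run these as a domain admin:

```bat
REM Register the HTTP SPN for the web application's DNS alias
setspn -S HTTP/portal.myclient.com MYDOMAIN\svcSPWebApp

REM Register the SQL Server SPN for the instance
setspn -S MSSQLSvc/sqlserver.myclient.com:1433 MYDOMAIN\svcSQL

REM List the SPNs registered to an account, to verify
setspn -L MYDOMAIN\svcSPWebApp
```

The -S switch (Server 2008 and later) checks for duplicate SPNs before adding, which saves you from one of the classic Kerberos misconfigurations.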
Here is a GREAT whitepaper on how to configure Kerberos for SharePoint.
NTLM is a simple challenge-response authentication method that only requires the user to be registered in AD. Because of this, NTLM cannot make the multiple security hops that Kerberos can make. So for each additional hop that is required by the application, the user will need to re-authenticate, or some other trusted credential needs to be used.

Getting back to the story... As you can imagine, the situation set off a flurry of emails, with me trying to explain that you cannot do such a thing, and the client insisting that he had done this impossibility in his test environment. So, like my Physics professors told me many years ago, when things don't make sense, go back to the scientific method. I didn't do that...
Instead of having my client describe his environment in excruciating detail, I tried to come up with ways in which he could be thinking NTLM and Kerberos could be working in concert. I asked him if he was talking about his IIS settings, which could indeed be set to be open to using Kerberos and NTLM.

Back in the good old days of the IIS Metabase, there was a property called NTAuthenticationProviders. You could use this property to explicitly state which Windows authentication method you wanted to use. In the IIS 6 days, if you wanted to use Kerberos, you needed to make sure that the "Negotiate" value was set. If you wanted to use both, you set the property to "Negotiate,NTLM". (If you want to know more about setting this property in IIS 5 and 6, click the link.)
In IIS 7, Microsoft changed IIS fundamentally. Instead of the proprietary Metabase, an XML file is used for configuration. By default this is the ApplicationHost.config file, found at %SYSTEMROOT%\system32\inetsrv\config.
In that file there is a section called windowsAuthentication, and under that section is the providers element. There you can see the values for Negotiate and NTLM. (If you want to know more about setting this property in IIS 7 and 7.5, click the link.)
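The relevant section of ApplicationHost.config looks roughly like this, under system.webServer/security/authentication for the site in question:

```xml
<windowsAuthentication enabled="true">
  <providers>
    <!-- Negotiate attempts Kerberos where the chain allows it -->
    <add value="Negotiate" />
    <add value="NTLM" />
  </providers>
</windowsAuthentication>
```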

This is where things can get tricky... SharePoint is nothing more than an IIS-hosted .NET application. IIS is a service on the Windows server platform. Windows, by default, uses Kerberos as its authentication method. Therefore, if your Kerberos chain is configured (all servers trusted for delegation, and SPNs for all DNS aliases, SQL instances, and service accounts), IIS will authenticate the user using Kerberos DESPITE the SharePoint web application being set for NTLM. User impersonation will not be used, but the initial authentication will be Kerberos. A check of the Windows Security logs will confirm this.

So, knowing all that I thought that perhaps my client had his SharePoint web application authentication method set for NTLM, but was seeing Kerberos in his security logs. Not the case. A screen shot later proved that he was indeed configured to use Kerberos.

So now I finally get smart and start to apply the scientific method. I ask for a complete description of their environment and what evidence he had to say that he was being authenticated via NTLM. And the truth finally emerged.
He was creating his web application in his intranet. He would then VPN in to the network from an outside network and connect to the site with his browser. He was prompted for credentials in a standard NT challenge-response window. It was this window that he was calling NTLM. He thought that Kerberos needed to include the client's computer in the Kerberos chain, and that Kerberos could only be configured if the user's browser was set to pass the user's logged-on credentials. This, of course, is false. Only the user needs to be authenticated, and it is the SECOND computer in the chain that needs to be trusted for delegation. The first hop you get with just the password. Because of this, ANY user can be used as the impersonated user, as long as they are registered with AD. It is how you can change the logged-on user in SharePoint. Because his Kerberos chain was intact, Kerberos could be successfully used as the authentication method.

And with that, the mystery was solved and all was well with the world. A second zone was not needed for NTLM, because Kerberos could be used. Despite my diagrams and explanations, my client STILL cannot get his head around the fact that NTLM is not being used at any point. He thinks that if the challenge-response window pops up, you are in NTLM's grip...

The moral of this story is to use the scientific method to solve issues, rather than setting out to prove that someone is wrong and that you are the smartest guy in the room... Goes better for client relations too.

Monday, November 14, 2011

Installing FAST Search Server 2010 for SharePoint, Gotchas, That'll Get 'Cha!

My client has a need for FAST Search Server 2010 for SharePoint in their SharePoint 2010 farm. So, I went about installing it... Two tickets with Microsoft and three weeks later, FAST is installed and configured. I think I ran into every strange exception and gotcha that FAST has, and I haven't even started to configure it in SharePoint yet. Wow. This was an interesting one, especially since the install and configuration went so smoothly in my R&D environment. The install at the client site? Absolute nightmare.

If you want the breakdown on how to install FAST, your best bet is to hit up Microsoft's instructions. They are very complete and cover everything you need for any type of deployment: MSDN Instructions. I'm not really going to go into the install here. I might do a post on the deployment.xml file, but Microsoft does better with their install instructions than I could. However, if you are looking for the weird stuff and the gotchas, you have come to the right place.

First, you need Windows Server 2008 R2 64-bit; FAST won't install on anything else. I chose to update the servers with all of the latest service packs and updates before beginning my install. This is a good idea, because if you run into a problem and need to call Microsoft, the engineers there will likely insist on you updating your OS first. Updating first gets all of that noise out of the way.
My client wanted to have a fault tolerant system, so my architecture included two servers. In FAST terms that meant that I would have one "Admin" server, the server that would run the Administration service, and one "non-admin" server, a server that was part of the FAST cluster, but did not run the Admin service. Only one server in the FAST farm can run the Admin service.
In creating the multi-server deployment you need to create a file to tell FAST what services will be running where. This is a simple XML file referred to as the deployment.xml. More on that little terror later.

Similar to SharePoint, you install the binaries on to your FAST servers then run a configuration wizard to configure the farm. That part is as simple as three or four clicks. Not a lot is done by the install program other than the usual file moves, registry updates, and assembly deployments. The configuration is where the real action happens. After installing FAST's binaries, I downloaded and installed FAST's service pack. This is a step recommended by Microsoft and just makes good sense. Why not do the service pack install before you configure your deployment? I don't see a valid reason why not. So that is what I do. For installs, I like to turn off the Windows Firewall so that I don't have any trouble with that service blocking any ports that need to be open for the install and configuration to work. After configuration is complete, I add rules to the firewall for my newly installed services, and turn it back on. So, fortified by my experience with EVERY SINGLE Microsoft program to date and how they deal with the Windows Firewall, I switched it off and went to start the configuration.

PowerShell Requirements
Now, FAST's configuration wizard is basically just a user interface that passes what it gathers and validates to a PowerShell script that actually does the configuration. In order to run PowerShell scripts, you first have to run a quick command in PowerShell to tell the server that it is OK to run them. You can run individual cmdlets until you are blue in the face, but once you try to run those same cmdlets from a ps1 file, you get a nasty error. Sooooo, one gotcha that I managed to avoid right off the bat: I always set my servers to be able to run PowerShell scripts during their initial Windows configuration. Check out the Set-ExecutionPolicy cmdlet for more information.
I run a lot of my own scripts, and I never sign them, so I always set my policy to Unrestricted. It is a bit of a security risk, but I mitigate that by setting the policy back to AllSigned after I have completed my installation. All scripts that will be run on a regular basis should be signed and taken through the normal configuration management process of good software design.
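In practice this is a one-liner in an elevated PowerShell window, for example:

```powershell
# Allow unsigned local scripts while installing...
Set-ExecutionPolicy Unrestricted

# ...and tighten it back down once the install is done
Set-ExecutionPolicy AllSigned

# Confirm the current setting
Get-ExecutionPolicy
```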

Service Account Requirements
The account requirements are kind of misleading, and you really need to be careful with them, ESPECIALLY if you are in an environment that is heavy-handed when it comes to GPOs. I chose my Admin server and started to run the configuration wizard on that server. The first thing it asks for is the username and password of the account that will run the FAST Windows services. I had prepared for this by making sure that the service account was a domain account, that it had the rights it needed on the database server to create and configure databases, and that it had the minimum rights for an account to run a service (log on locally, log on as a service). So, after I entered the username and password, I got a validation error saying that the account is invalid. What?

I double check the hardware and software requirements and find that all I need is an account with log on locally, and that the install account is an administrator on the server.
The install account MUST be a member of the local Administrators group. This is a hard requirement; the script does a validation check. The goofy thing is that if the install account is not a member of the Administrators group, it is the validation of the SERVICE account that fails, saying it is an invalid account. This was a HUGE gotcha for me.
So, after I added that account to the administrators group, I was able to get past that validation error.
The other minor gotcha here is that the account the install script uses to do the database work is not the install account; it is the service account. So, before you begin the FAST configuration, be sure to grant the service account at least dbcreator rights. I like to set the account that is doing the database work to sysadmin during the install. That way any scripts the install wants to run on the database server will succeed. If you do this, be extra careful to remove the service account from the sysadmin role IMMEDIATELY after completing the install!!
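A hedged T-SQL sketch of those grants (the login name is a placeholder; SQL Server 2008-era syntax):

```sql
-- Create a login for the FAST service account and grant dbcreator
CREATE LOGIN [MYDOMAIN\svcFASTSearch] FROM WINDOWS;
EXEC sp_addsrvrolemember 'MYDOMAIN\svcFASTSearch', 'dbcreator';

-- If you temporarily granted sysadmin for the install, revoke it afterward!
EXEC sp_dropsrvrolemember 'MYDOMAIN\svcFASTSearch', 'sysadmin';
```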

Disjointed Namespace
The next section of the configuration wants you to enter the Fully Qualified Name of the server into a text box. This I do: FASTAdmin.SharePoint.MyClient.com. Blam! Validation error: Please enter a valid computer name. After going through the normal spell checks and whatnot, I found that I did indeed have a valid computer name. I went to another server in the same subnet and pinged my Admin server to see if I got proper DNS resolution. I did. So... Whiskey Tango Foxtrot?
When all else fails... read the log files. So I went to the log files and found that the LDAP lookup was failing on the server FQN. What?? I learned that this can be a problem if your DNS namespace is disjointed. What does that mean?
Well... Say you are in an environment that was originally set up as a UNIX network, or some other type of network that does not support Directory Services and DNS integration. To segment the network into logical units, the network engineer used DNS to designate the divisions of the domain. This is done easily enough by adding prefixes in DNS through BIND commands. When everything is done you have a nicely segmented namespace, with each division having a prefix telling you exactly what division owns what computer. The FQN of the computers ends up being COMPUTERNAME.Division.MyClient.com.

Enter Active Directory. Active Directory uses a protocol called the Lightweight Directory Access Protocol (LDAP), and it integrates tightly with DNS. When configuring AD, it is recommended that it be implemented to closely mirror your DNS environment. So, for each DNS segmentation, AD should be segmented with a child domain. This ensures that both LDAP resolution and DNS resolution can occur. HOWEVER, Windows has some boxes you can check and settings you can hack so that you can have a single domain, yet keep your segmented DNS environment. This is called a disjointed DNS namespace. It causes problems...

Enter Windows Identity Foundation. WIF is new in the Windows world, and what it does is introduce claims-based authentication. SharePoint 2010 and FAST use claims extensively. FAST uses WIF for all of its authentication, and it just so happens that WIF uses LDAP exclusively to validate computer names. What happens if your LDAP and DNS do not match? Your LDAP query fails and you can't install FAST...
In my log file I see exactly that. The LDAP query that is failing, CN=FASTAdmin,DC=SharePoint,DC=MyClient,DC=com, does not exist because there is no DC=SharePoint. The SharePoint part of my FQN is simply a DNS prefix, not an actual child domain. I would have to fix this problem before I could move on.

I was happy to learn that the FAST team at Microsoft had addressed my very problem and fixed it in FAST Search SP1. So I downloaded that guy and found the setting in the new psconfig.ps1 file that needs to be updated (set $disjointNamespace = $True). I saved the file and ran it... only to see the exact same error pop up in my log file... Grrrrrr....

As far as I can tell with my work at the client site, and my R&D work, FAST simply will not work with a disjointed namespace. It failed every time I tried it. So, to fix the issue, the DNS prefix was removed and the FQN of the server was set to FASTAdmin.MyClientDomain.com. If you know how to get FAST running in a disjointed namespace, let me know!!

After the move, I no longer saw that particular exception in my log file... New and exciting exceptions awaited me!

FIPS Encryption
The next exception encountered was a lot easier to solve... technically speaking. The solution, however, kicked off a political firestorm that rages to this day. Anyway, this blog is not about office politics...
So, in the logs the exception is:
"This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms"


This one is easy to solve. Simply open the registry, navigate to HKLM\System\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy, and set the Enabled value to 0. Close the Registry Editor and that is all there is to that one.
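The same flip can be done from PowerShell; setting Enabled to 0 disables FIPS-only algorithm enforcement (elevated prompt required):

```powershell
# Disable FIPS-only algorithm enforcement
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy" `
    -Name Enabled -Value 0

# Verify the value
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy"
```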

Microsoft Distributed Transaction Coordinator (MSDTC)
MSDTC is required for FAST to run. There is a part of the install script that will attempt to install this service if it is not installed. BUT if you have a GPO that blocks MSDTC from installing you will get the following exception:
Error Unable to get a handle to the Transaction Manager on this machine. (8004d01b)

This one requires that you change your Group Policy settings, then confirm that MSDTC is installed properly. Next, go into the MSDTC properties and make sure that the Allow Remote Clients check box on the Security tab is checked. This is important if you are installing a non-admin server later.

Firewall
Now things get really funky. Microsoft is notorious for adding their firewall product to their operating systems, then having their applications completely ignore that a firewall is even there. This leaves the user to scratch their head and wonder why they can't connect to anything on their computer. So, a common process is to disable the firewall completely, do the installs, configure the ports that are needed to access the application, write firewall rules for them, THEN turn the firewall back on. FAST Search Server breaks from this mold... FAST actively looks for the firewall, confirms it is on, then writes its own firewall rules. If the firewall is not detected, yup, you guessed it, an exception is thrown and the configuration fails. You can turn the firewall off after FAST is running, but during the install you must have the firewall turned on. If not, the configuration will not continue. Major headache.
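A couple of hedged netsh examples; the rule name is made up, and the port shown is the default Content Distributor port, which depends on your base port:

```bat
REM Make sure the firewall is on for all profiles before running the FAST configuration
netsh advfirewall set allprofiles state on

REM Example post-install rule for the default Content Distributor port
netsh advfirewall firewall add rule name="FAST Content Distributor" dir=in action=allow protocol=TCP localport=13391
```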

PSConfig.ps1 Problems
The final problem I ran into on the configuration of the Admin server was that during the execution of the cmdlets that create the Administration database, the script would somehow lose the FAST PowerShell snap-ins. I don't know why, I don't know how. It seems to be a problem unique to this client's environment; however, I thought I would include it here, just in case somebody else is getting this exception and does not know how to clear it.
The following exception kept popping up during the configuration:
Exception- at System.Diagnostics.Process.StartWithCreateProcess(ProcessStartInfo startInfo)

If you poke through the PSConfig.ps1 script you can see that when this method is called, the install script spawns a new PowerShell instance. For whatever reason, it was when this instance was spawned that my FAST snap-ins would be lost.
What I did to clear it was trace back to where the method was being called. It was in another file called commontasks.ps1, located in the %FASTInstallDirectory%\installer\scripts\include folder. At line 2118, I added a little logic to detect whether the FAST snap-in was loaded and, if not, load it:
If ((Get-PSSnapin | ? { $_.Name -eq "Microsoft.FASTSearch.PowerShell" }) -eq $null) { Add-PSSnapin Microsoft.FASTSearch.PowerShell -ErrorAction SilentlyContinue | Out-Null }
This code ensures that the snap-in is loaded so the configuration can continue. What I found out later was that because of this problem, my database did not get created. Not really a big deal, because the configuration of FAST doesn't write anything to the database; it just configures the FAST services' XML files and other such things. After the configuration completes, a database will need to be created for FAST to use. It must be created before you do any other configuration, such as adding a non-admin server or connecting FAST to the SharePoint farm. Fortunately, it is very easy to do, provided you use all of the same database settings that you passed to the PSConfig.ps1 script.
Run the following in a FAST Administrative PowerShell instance:
Install-FASTSearchAdminDatabase -DbName YOURADMINDATABASENAME -DbServer YOURDBSERVERNAME -Force

It is important to realize that this particular cmdlet runs as the account that you are logged in as. So, be sure to run it using an account that has at least dbcreator rights on the database instance.

After all of these problems, my Admin instance of FAST Search Server 2010 for SharePoint was complete! My problems were over right?? WRONG!!

Deployment.XML File and IPSec Requirements
I'll tackle these two issues together, because they are closely related, and throw the same types of exceptions... Grrrrrr...
After the pain and suffering of installing the Admin server, the non-admin server should have been a walk in the park. I knew all of the gotchas, and I was able to avoid them during the binary install, and most of the server configuration. But as soon as I attempted to configure the non-admin boxes, problems started to pop up.
After running for a good amount of time, the configuration would fail, and the following exception would be in the log:
%FASTInstallDirectory%\bin\MonitoringServiceConfig.exe Output - Error: The file '%FASTInstallDirectory%\etc\middleware.cfg' was not found.

What??? Can an exception get more cryptic? Why not just throw "Object reference not set to an instance of an object"? That would be equally as useless, and appropriate at the same time! OK, rant over. This post is too long to subject you to my feelings on Microsoft's exception handling.
What happens during the configuration of a non-admin server is that the non-admin server will attempt to make an IPSec connection to the admin server and download a series of files that configure the services on the non-admin box. In this way you can set up many non-admin servers quickly, entering configuration data only once. Sounds great, right? Right...
The problem is that the configuration script does not check to see if this whole download procedure completes successfully. It happily chugs on to validate if the files that were supposed to have been downloaded exist. If they don't exist, then the script blows up, and you get the idiotic, meaningless exception above...
So, again PowerShell to the rescue. You can attempt to run the IPSec connection and file download using a PowerShell cmdlet. The good news with this cmdlet is that you will get a MUCH better exception if it fails.
The cmdlet is as follows:
Set-FASTSearchIPSec -create
The cmdlet will prompt you for a username and password; use your FAST service account.
The cmdlet would chug along for a bit, then produce the following exception:
An error occurred while configuring IPSec - Could not connect to the admin node.
This may be because of,
  1. Invalid admin node name
  2. Invalid baseport. Baseport of admin node and non-admin node must be same
  3. Admin node is not up and running
  4. Missing IPSec rules on admin node. If you added this host to the deployment.xml after running this script on the admin node, you need to rerun the IPSec cmdlet on the admin node
Awesome... What if, as in my case, all of that stuff is good? What if when you run this same cmdlet on the admin node everything is awesome??? You have to look at the underlying technology to figure out what is up.
What is the underlying technology? IPSec. What is IPSec? Internet Protocol Security. On Microsoft operating systems, what does everything that uses the Internet come down to? Internet Explorer. I. Freaking. E.
Here is what is going on: for some goofball reason, IPSec uses the same connection settings as IE, and those settings are configured in IE. So if you go into IE's Internet Options, Connections tab, and click on LAN settings, you see a little check box that toggles whether IE automatically detects the Internet connection settings. What this really does is send out a broadcast to see if an Internet proxy server responds. If one does, IE uses that proxy server to connect to the Internet. If it detects nothing, the process times out and IE happily connects directly.
If this box is checked, IPSec will attempt to connect to the Internet the exact same way as IE... Only IPSec doesn't handle the time out like IE does. If IPSec does not detect a Proxy server, IPSec up and fails. Fun, huh?
You clear this problem by unchecking the Automatically detect settings box or, if you are using a proxy server, by manually entering your proxy server settings.
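If you want to confirm what IE (and thus IPSec) is set to without clicking through the dialog, here is a PowerShell sketch. It assumes the standard per-user registry location on Windows Server 2008 R2, and that the auto-detect flag is bit 0x08 of byte 8 in the DefaultConnectionSettings binary value, so treat it as a diagnostic hint, not gospel:

```powershell
# Hedged sketch: registry layout assumed from Windows Server 2008 R2.
$key = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Connections'
$settings = (Get-ItemProperty -Path $key).DefaultConnectionSettings
if ($settings -ne $null -and ($settings[8] -band 0x08)) {
    Write-Output 'Automatically detect settings is ON - this is what trips up IPSec'
} else {
    Write-Output 'Automatically detect settings is OFF'
}
```

Note that these settings are per-user, so check them while logged in as the account that will run the IPSec configuration.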
This kind of crap is why people hate Microsoft so much. If you integrate everything, fine, integrate EVERYTHING. Don't integrate half, then just quit!! It is frustrating as all outdoors when you run in to a problem like this. Why would you check in IE for an IPSec issue?????? It makes zero sense. It is like checking your MS Paint settings so that your video card will work correctly.

With that exception cleared we are good to go, right? Wrong... Run the configuration script again and... same error. Repressing the urge to go on a multi-state shooting spree, I run the Set-FASTSearchIPSec cmdlet again to see what is happening now. A new exception now graces my screen:
XML Validation error: Data at the root level is invalid. Line 1, position 1.
Really??? Really??? What XML file???!!!???
One of the files downloaded is the deployment.xml file that you configured and put into your Admin server configuration. This file tells the configuration script what your indexing configuration is going to look like, which servers are running which service, etc. But there is a little catch. If you try to use Visual Studio to create your XML file, Visual Studio will save the file using an encoding that FAST doesn't like, and it will add a byte order mark (BOM) to the very beginning of the file. That will cause everything to go kerplow!! Stupid, right? Yup!! Sure is. Can't use Microsoft's products with... Microsoft's products.
So how to clear this exception?
First you need to go back to your Admin server. When you get there, navigate to %FASTInstallDirectory%\etc\config_data\deployment and find the deployment.xml file. Open that guy up and copy everything but the encoding statement at the top of the file. If you don't have an encoding statement at the top of the file, copy everything.
Rename deployment.xml to something else... Microsoft_Is_Stupid.xml sounds good to me.
Open up Notepad and paste everything into the new file that it creates when the program is started. Save the file in the same location as the other file, and name it deployment.xml.
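If you would rather script the fix than round-trip through Notepad, here is a sketch that re-saves the file as UTF-8 with no BOM. The path variable is a placeholder for your own %FASTInstallDirectory%:

```powershell
# ReadAllText strips any BOM on read; UTF8Encoding($false) writes none back.
$fastInstallDir = 'D:\FASTSearch'   # placeholder: your %FASTInstallDirectory%
$path = Join-Path $fastInstallDir 'etc\config_data\deployment\deployment.xml'
$xml  = [System.IO.File]::ReadAllText($path)
[System.IO.File]::WriteAllText($path, $xml, (New-Object System.Text.UTF8Encoding($false)))
```

Either way, the goal is the same: a deployment.xml whose very first byte is the `<` of the XML declaration.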
Now open up a FAST Admin Shell and run Set-FASTSearchConfiguration. Or restart all of the FAST Services, or reboot, whatever...
After all that is done and the Admin server is back up, run the Set-FASTSearchIPSec cmdlet again. Confirm that it completes successfully. When it has, re-run your configuration script. It should complete successfully this time.

I realize that a lot of these problems come from FAST being a newly acquired software package. I know that the developers on the FAST side of things are working to come into the Microsoft fold, and that this version of FAST is a "1.0" product. I realize that my client's environment was unique considering their security zeal. This eases the pain of the install a bit, but what really grinds my gears is the absolute lack of documentation of these issues. I had to search blog after blog after blog to find out what was going on, both under the hood and with the exceptions. Microsoft provided very little in the way of support. Sure, I learned a TON about how FAST works and all of the moving parts that go along with configuring it, but I paid for that with the stress and absolute agony of this install.

Thursday, October 13, 2011

iOS 5

I installed iOS 5 on my iPhone 3GS. It was a rough start, but I am happy with the end result.
The last several iterations of iOS have really slowed my 3GS down. The latest, 4.3.5, slowed my phone so much that I was thinking Apple was purposely slowing down the 3G-series phones to force owners to upgrade to the 4S. Unlike Microsoft, which benefits from speeding up old hardware, Apple needs new products moving all of the time to keep its stock price up and the company solvent, since no one can install their OS or use their software on anything other than Apple-provided hardware... and Jobs said that Bill Gates was too uptight. So slowing down older hardware would be in their best interests, and it is therefore not a large logical leap to assume it was happening.

With iOS 5, I found that my phone not only performs much faster, the battery life and the signal strength seem to have improved as well. I can't yet tell about dropped calls, as I have not had the time to really test that particular annoyance of the iPhone out completely. Whether this is simply a new algorithm used to compute the signal and battery strength, I have no idea. I do remember seeing my signal strength greatly improve when I installed iOS 4, only to be disappointed later when I found out that Apple was displaying greater signal strength than was really there. This could be more of the same, but I want to believe that Apple has improved their software and thus found ways to improve their power usage and signal detection.

The new features are very welcome, especially the new camera features. The camera can now be accessed from the lock screen, and you can take a photo by pressing the volume-up hardware button. It is infinitely easier to take a photo of yourself this way than by trying to find the soft button. I would always miss and end up in the photo album...

The new notification center is also very cool. You get a plethora of app data in there, accessed by simply pulling down from the top of the screen where the clock usually is. I know this is a feature they stole from Android, but it is a welcome addition.
Being able to choose how notifications are delivered is a great change as well. You can have your less important push notifications sent to a banner rather than to the modal alert box. Very nice, very good.
Apple is pushing iCloud, but I really am not very interested in that. I might be later, but right now... Meh.
The final new feature that I really like is that you no longer need to attach the phone to iTunes to update the software. This is a BIG help. However, it kind of spooks me, because of my experience updating my phone to iOS 5.

During the install, my phone crashed, and I had to restore it back to factory settings, then restore my settings and apps. Then I had to put those apps back into their folders. That took a long time and it sucked. Double sucked... If I were performing this update without iTunes, would I be out of luck? With iTunes, I get a backup and I can restore the phone to somewhat of a state that it was in before. Without iTunes, will the phone just puke and then be dead? Something to think about... I know you are thinking: what about the backup that is made on iCloud? Think about it: what if I do a phone update somewhere I won't have access to a computer with iTunes for some time? Without the computer, I can't set the phone back to factory and then do the restore. Will the phone just be toast until I can find some computer to plug it into? Or does the phone update by throwing everything into memory... Sounds very inefficient, and lots of security problems come immediately to mind, but that never bothered Apple before...

All in all, a good update, and I recommend it for all of the iPhone 3GS users out there. With this update I think I can put off getting the next version of the iPhone until the iPhone 5 comes out.

Wednesday, October 5, 2011

Hiring for SharePoint

I am at the SharePoint Conference 2011 this week and I have found a recurring theme popping up with the manager types who come to the conference. Most of them are overwhelmed by the breadth of the SharePoint product, and nearly all of them realize that what they have in place to support SharePoint is not what they need.

I was talking with another attendee about a project he was involved in. He was trying to deploy SharePoint as a fault tolerant system, and I was running down what he needed to do. We noticed that we were attracting a lot of attention from the leader types. You know, the guys in the button up shirts trying not to look lost in the sea of geeks. After a bit, the managers started to ask how they could go about hiring the right people to set up their SharePoint environments "right." The first thing I said was that to set up their SharePoint environments "right," they needed to spend some serious money. In my experience, SharePoint needs a dedicated cross-functional team in order to be successfully deployed and maintained. The word that really stuck in the managers' craw was "dedicated." Even after seeing all of what SharePoint can do, they still believed that there would not be enough work for dedicated staff. This just isn't true. At any company where there is no dedicated SharePoint staff, immediate problems start to pop up, mostly surrounding the direction of the product and what it can do. If staff is not dedicated, there is no incentive for an individual to study the features of SharePoint and find out how it can best be used in the company. There is no one to support the product, leaving the end users wondering how to use it. Essentially, if you don't have dedicated staff, you are throwing the product down the tubes and dooming your deployment to failure.

As to the hiring question... This spawned a long and interesting conversation about how many people were needed, getting input from the tech side, me, and the business side, them. All in all, those that accepted the fact that they need dedicated staff saw value in the team that I suggested.

In my ideal deployment, and this means ANY deployment, from one server to 10,000, you are going to need four roles: Architect, Administrator, Designer, and Developer. The most important, most senior, and most critical is the Architect role. This person will lead the team, develop the overall vision of the deployment, be the chief evangelist, and mentor the other roles. As you can imagine, this role is also the most difficult to hire for.

Architect

This should be the first person hired, and they should do the hiring for the rest of the team. This is a special person, because they need to have a split IT personality. By that I mean they need experience with both software development and systems administration. It helps if they have some network engineering in their background as well. By software development experience, I mean that they need a firm grasp of OOP, the .NET Framework, and modern design principles. They need to have written code as a developer on a software team. A person who has written code, but only in a one-project, one-programmer shop, is not your ideal candidate, as these developers often have a tough time leading a team of people.
They need to have designed and written WCF services, web services, and Windows services. They need to know the role that each plays, and how to integrate them into a cohesive SOA. After all that... they need to have a good grasp of the SharePoint API, and they need to have written and deployed the major pieces involved in SharePoint development. What major pieces? Web parts, timer jobs, site definitions, and workflows. They need to know what each part does and what problems each solves.

The Architect should be able to speak to the database staff about indexes, backups, and SharePoint database architecture. They should also have a good grasp of what Remote BLOB storage is and when it is a good idea. They should be able to present this to the database staff.

On the other side of the IT fence, the Architect needs to know how SharePoint is installed, configured, deployed, and maintained. In other words, this person needs to have spent time as a professional SharePoint Administrator. As an administrator this person should have handled at least one major farm migration. This will ensure that they know what is involved in a migration, and that they have all of the lessons learned from that migration. It also ensures that they have been involved in SharePoint long enough to have seen where Microsoft has come with the product, and have some idea on where Microsoft is going with the product.
They need to have planned for and executed disaster recovery plans with SharePoint, and should be very comfortable with recovery scenarios, from a total loss of the farm to the loss of a single file.

They should be well versed on search. How SharePoint does it, what the limitations of the Out Of the Box search features are, and when it is a good idea to put FAST Search in. They should know how to configure search so that it will return the proper results.

They also need to be a very well rounded IT person. They should know about the different methods on how to make SQL redundant, they should know how to load balance a web server, they should know what LDAP is, and what a good Active Directory design consists of and how it will impact the SharePoint deployment.

With all of this experience required, this will be a very difficult position to interview for. You have to accept that the SharePoint material you will simply have to ask about and hope that they give you a straight answer. You are an IT manager, so hopefully you have some skill in the area of bullshit detection... If you have development staff, bring in a developer to aid in the interview. You will get a good grasp of the candidate's development skills from your trusted developer. Also bring in one of your Systems Engineers to find out what they know about AD and other Windows related services. If you have web staff, bring one in to discuss web topics. Your trusted staff should be able to give a good rundown on the candidate's general skill. From this you should be able to judge if the candidate is being truthful about what is on their resume.

Here are some of the qualities that I think that an Architect should have personally:
  • Passionate about SharePoint, this should be evident in the interview.
  • Passionate about IT in general.
  • They should have some decent social skills. They will end up being a cheerleader and evangelist for SharePoint and its features as well as effectively lead the SharePoint team. Thus, they have to be able to communicate effectively with other humans.
    With all of the technical skill we are asking for, this particular feature will be some of the hardest to meet...
  • They should be driven to improve their skills. They will ask about training and conference budgets.


With all of the above, you must realize that this person will not come cheap. This kind of skill comes at a premium, and you have to give proper incentive for them to stay. The good news is that you can save money on the other members of the team.

Administrator
The Administrator position is going to be easier to fill. You can ask for some SharePoint administration experience, but it really isn't required, especially if you want to save some money on this position. In a perfect world, this person will have about two years of experience with SharePoint as an IT pro.
The necessary skills, in my opinion, are going to center around, in order of importance, IIS, PowerShell, software deployment, Windows, and experience supporting "N" tier applications. SharePoint experience would fit somewhere between Windows and supporting "N" tier applications.

IIS is the key in this position, because SharePoint is a .NET program hosted in IIS. Without a firm grasp on how IIS works, how applications are hosted, how WAS works, the administrator will fall flat on his face when the fit hits the shan. SharePoint deals heavily in Web Services and WCF and if the administrator has never supported those two things in IIS, they will be baffled on why the sections hosting the different technologies use the same authentication, but have different settings in IIS. Without web experience, the critical difference between a web application and a web site will trip them up. These things are vital to the SharePoint farm.
I would like this person to be up on the latest load balancing techniques so that they can put in to practice the load balancing methods that the Architect selects for the deployment, but this is not necessary and it can be something that the Architect mentors the Administrator on.
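To make the web application versus web site distinction concrete, appcmd (which ships with IIS 7.x in %windir%\System32\inetsrv) can list both, and a candidate comfortable with IIS should recognize the difference in its output immediately. A quick sketch, run from an elevated prompt on the web server:

```powershell
# Sites are the top-level containers with bindings; applications live inside sites.
& "$env:windir\System32\inetsrv\appcmd.exe" list site
& "$env:windir\System32\inetsrv\appcmd.exe" list app
```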

Experience with PowerShell is absolutely vital. Much of SharePoint administration is done through PowerShell, and many tasks can be automated with PowerShell. Being only one person, the Administrator who only uses Central Administration can only do so much. The Administrator who automates and scripts tasks multiplies their worth to the company, by being able to clone themselves many times over. Also, many tasks need to be handled off hours, the stupid Administrator stays late, or remotes in from home to complete these tasks via Central Administration. The smart, savvy Administrator, the one you want to hire, scripts all off hour tasks and has the script send him email confirmations of task completion or failure. No muss no fuss. An Administrator that automates the most common tasks also builds in automatic redundancy for themselves and has more time to focus on interesting and important work, R&D, and preventative maintenance. The administrator who does not script is constantly putting out fires, repeating the same tasks over and over, and never seems to have any time to R&D and do anything preventative.
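As a concrete, if simplified, sketch of that habit: an off-hours script can report its own outcome by email. The addresses and SMTP server below are placeholders, and Send-MailMessage ships with PowerShell 2.0:

```powershell
# Placeholder addresses/server; splatted so both calls share the same settings.
$mail = @{ To = 'spadmin@example.com'; From = 'farm@example.com'; SmtpServer = 'smtp.example.com' }
try {
    # ... the actual off-hours task goes here (content deployment, warm-up, etc.) ...
    Send-MailMessage @mail -Subject 'Nightly SharePoint task completed'
}
catch {
    Send-MailMessage @mail -Subject "Nightly SharePoint task FAILED: $_"
}
```

Schedule something like this with Task Scheduler and the Administrator never has to remote in at 2 AM just to click a button in Central Administration.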

As the developers will be handing the Administrator their code to deploy, it is a good idea to look for an IT Pro with experience in deploying software. Again, you want to see this person insist on scripted deployments, as well as packaged solutions. It makes their job easier, and makes a solid repeatable, predictable deployment the rule rather than the exception.

Supporting "N" tier applications forces the Administrator to always be thinking about what service is calling what. It also means that the candidate knows about authentication methods that can handle multi-server hops, or what must happen so that authentication methods that cannot multi-hop on their own can make those vital hops.

Developer
Again, it is not vital that the Developer have SharePoint development experience. It is vital, however, that the Developer have .NET web development experience. From the developer standpoint, SharePoint is just another API to call. Take away the funky deployment, and SharePoint development is just like any other development: inherit a base class and implement a method. A week-long training class will get the Developer up to speed on SharePoint development. After that, the Architect will be able to mentor them and guide them to become SharePoint developers.
What we do want out of a developer is a deep sense of curiosity. We want them to want to know why things work the way that they do, and once they know that, they should wonder how they can manipulate those things to fit what the business wants to do.

Designer
The most common flaw that very technical people have is that they have no eye for the aesthetic. Their art is in their code, or in how smoothly their servers run. Unfortunately, no matter how brilliant the code behind the site may be, if it is not pleasing to the eye, or if the user experience is bad, the deployment will fail. This is where the Designer comes in. The Designer is a person who should be able to design functional, pretty web sites. I also like my Designers to be experts in the client-side technologies that make their visions possible. JavaScript, jQuery, CSS, Flash, and Silverlight should be on the Designer's resume, along with the ability to work with technical people. In my opinion, the Designer should be able to create a UI, then hand that UI off to the Developer to wire up the functionality. In other words, the Designer creates the button and puts it on the page, and the Developer takes over at the mouse click.
It is not required that the Designer know about SharePoint. SharePoint experience is good, but it is far from the most vital thing you are looking for.

DBA
This last position is not on my four-role team list above because it does not need to be 100% dedicated to SharePoint. SharePoint handles most of the database work all on its own, with as little human interaction as possible. BUT since a major part of the SharePoint functionality depends on the database, it is important to have a DBA who can be the "go-to" person for SharePoint team issues. It will be up to the DBA and the Architect to make the redundancy decisions with regard to the SQL farm. If the Developer needs to hit an external database for some reason, the DBA is who they go to to get permissions and other things in line. When the Administrator is setting up Business Intelligence, they will need the DBA to create data cubes in Analysis Services.

With these positions filled, your SharePoint deployment is set up for success. SharePoint is a unique product, and not one that can simply be thrown in to an environment. It needs dedicated people to support and maintain it so that you can get the most out of the product.

Tuesday, August 9, 2011

Microsoft Comes Calling

I have been ignoring this blog...  I know...  I start to write something then I get busy and well...

Anyway, I get calls and emails from recruiters all of the time.  Most are for consulting jobs here and there, all require me to move somewhere, sometimes for a three month contract.  No thanks.  I get these contacts because I try to keep my resume and job information updated on the Internet sites.  It keeps the head-hunters informed on what I am doing, and the occasional call or email for the one off silly contract here and there is a price that I pay.

The reason I do such things is for the rare occasions that a good legitimate opportunity comes my way.  I found my last job this way, and now something very interesting has come my way....  A recruiter from Microsoft has given me a call.  I have to admit that I have always been in awe of the Redmond company.  Despite their shortcomings they have been one of the most innovative and influential companies in human history.  Getting a call from the recruiter was a big deal for me.

After the initial shock wore off, I reminded myself that Microsoft is just another company, and the opportunity, while lucrative, is just another opportunity.  I need to evaluate the company and the opportunity objectively and with an open mind.  I have to approach their interviews with the same confidence that I would any other position.

The big difference in approaching this interview process, as opposed to most interviews I do, is that the interviewers will be as technically sound as I am with the SharePoint line of products, or more so. In most interviews, the interviewer has a basic understanding of the product and thinks they know what it can do for them. I come in and, during the interview, give them a rundown of what SharePoint can actually do, and let them know if the product will work for them.
In this case, the people I will interview with will not only know the product, they will know where the company wants to take it, and will have had a hand in bringing the product to where it is today.

At any rate, I am simply in the phone screen stages of the interview process.  I have not been asked to Redmond, I have not talked with anyone other than the recruiter.  I will be speaking with the hiring manager soon, and from there, who knows?  It is an interesting opportunity, and I look forward to the process, IF the hiring manager likes me well enough to bring me in.  I have heard many horror stories about Microsoft interviews, so I hope that they will be gentle with me.  I will post my experiences...  that is if I get an interview...

Tuesday, February 8, 2011

Adding the SharePoint Snap In to PowerShell

What's the first step? If you are using the ISE, and you want to have your scripts executed from any PowerShell environment:
Add-PSSnapin Microsoft.SharePoint.PowerShell

This line adds the SharePoint snapin and makes any PowerShell console the same as the SharePoint Management Console.

If you want to get cute you can add the following code to see if the SharePoint snapin is available, and throw a friendly error message if it is not:

$snapin = Get-PSSnapin -Registered | Where-Object { $_.Name -eq 'Microsoft.SharePoint.PowerShell' }
if ($snapin -eq $null)
{
    Write-Output 'SharePoint snapin is not registered on this server'
}
elseif ((Get-PSSnapin | Where-Object { $_.Name -eq 'Microsoft.SharePoint.PowerShell' }) -eq $null)
{
    # Registered but not yet loaded in this session
    Add-PSSnapin Microsoft.SharePoint.PowerShell
}


As long as you know the namespace of the PowerShell snap in that you want to add, the above code will work to add it; simply replace the SharePoint snapin namespace (Microsoft.SharePoint.PowerShell) with the namespace of your snap in.
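The same pattern can be wrapped in a reusable function so any script can load any snap in. The function name here is my own invention:

```powershell
function Add-SnapinIfRegistered([string]$Name)
{
    # Already loaded in this session? Nothing to do.
    if (Get-PSSnapin -Name $Name -ErrorAction SilentlyContinue) { return }

    if (Get-PSSnapin -Registered -Name $Name -ErrorAction SilentlyContinue) {
        Add-PSSnapin -Name $Name
    } else {
        Write-Output "Snapin '$Name' is not registered on this server"
    }
}

Add-SnapinIfRegistered 'Microsoft.SharePoint.PowerShell'
```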

Monday, February 7, 2011

Windows Server 2008 R2 High DNS Memory Usage

I was messing around with the Desktop Experience on my Windows Server 2008 R2 server and decided that I wanted to use one of the gadgets that I found useful with Windows 7. The gadget shows all of the core processor usage and the memory usage on the server.
This was a domain controller in my R&D farm, it is also my Hyper-V host so I kind of use it as a workstation as well. Don't judge me...

I installed it and found that my memory usage was abnormally high. Investigating, I opened Task Manager and found that DNS was eating up about 35MB per core. What the??? DNS usage should be nearly non-existent. So I started to do some checking.

I hooked up my handy dandy Colasoft trace tool and found that I got a ton of errors when trying to connect to my ISP's DNS. Something about wrong format, more specifically "Additional Record: of type OPT on class Unknown DNSClass". Failure after failure after failure. It was literally like my DNS service was caught in some sort of infinite loop. What is this noise?

I found out that Microsoft was trying to be proactive with Server 2008 R2 and enabled, by default, a new DNS format that hasn't quite been adopted by everybody yet. Like IPv6, this format, EDNS, is not yet everywhere. Bad news for me, because it was causing me pain.

I really don't have a whole lot of experience with DNS, other than adding records and whatnot, so how do you disable this EDNS?
I found on TechNet there is a little tool, dnscmd, that will do the trick. So, all I did to disable it was to run:
dnscmd /config /EnableEDnsProbes 0
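If memory serves, dnscmd's /Info switch accepts the same property names as /Config, so you can check the value before and after making the change. Run from an elevated prompt on the DNS server:

```powershell
dnscmd /Info /EnableEDnsProbes    # show the current setting
dnscmd /Config /EnableEDnsProbes 0
dnscmd /Info /EnableEDnsProbes    # confirm it now reads 0
```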

Immediately, my memory dropped from 35MB per core to about 2MB total. ColaSoft reported no new format errors. For now, it seems as if EDNS is not yet ready for the real world.

Tuesday, January 11, 2011

Lots of VMs To Create In Hyper V? Sysprep!

Unless you are only doing one or two virtual machines with Windows Server 2008 R2 in Hyper-V, or any virtualization software and Windows OS for that matter, you will want to save yourself some configuration time by creating an image of the OS you will be using. Windows gives us a very easy utility to do this called sysprep. It is found in the Windows directory on your system drive under System32\sysprep.

Since I do a lot of work with SharePoint and all of its related systems, I create several images so that I can quickly create a server based on what I need at the time. First I create a "base" server image. This is the generic install of Windows, no Roles installed. This gives me an image that I can use for any custom purpose. One thing that you MUST do first is to run Windows Update. You do not want to waste time on each VM getting it up to date. Do this first, then use sysprep to create your image.

After making an image of this disk, I will use the image to create a new server and then add the specific roles that turn it into a generic image of the server role I want to have on hand. I always create an image of a server with IIS and the latest versions of the .NET Framework.

For technical reference, Microsoft has a page for the command line options. You could use the UI, but... I don't trust it. The line that I use is the following:
sysprep /oobe /generalize /quiet /shutdown
This removes all of the security information that is created when the server is first installed, and puts the server into the welcome (OOBE) mode that prompts you for information the first time the server starts up.

From here, copy the .vhd file somewhere safe. This is the generic disk master from which you will create all of your new VMs. Be sure to name the file well. I like to name it with the OS, the role, and the last time I ran Windows Update. When updating gets to be too much of a pain after the creation of a VM, I will create a new image.
When you need a VM you will make a copy of this file, move the new file in to your .vhd directory and attach your VM to this new .vhd file.
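On hosts with the Hyper-V PowerShell module (built in from Windows Server 2012 onward; on 2008 R2 you would do the same steps through Hyper-V Manager or WMI), the copy-and-attach routine can be sketched like this. The template path and VM name are placeholders:

```powershell
# Placeholder paths/names; adjust to your environment.
$template = 'D:\Templates\W2K8R2-Base-2011-01-11.vhd'   # sysprepped master image
$vmName   = 'SP-APP-01'
$vhdPath  = "D:\Hyper-V\$vmName\$vmName.vhd"

# Copy the master first; never attach a VM directly to the template disk.
New-Item -ItemType Directory -Path (Split-Path $vhdPath) -Force | Out-Null
Copy-Item -Path $template -Destination $vhdPath

New-VM -Name $vmName -MemoryStartupBytes 4GB -VHDPath $vhdPath
Start-VM -Name $vmName
```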

Of course, there are more and more things you can do with automated deployment. Microsoft even has a very interesting program and tool kit that you can download and use to create all sorts of answer files and what not for large unattended image installs.