Friday, March 23, 2012

Need a Connection String?

This handy little trick has helped me for years. I have used it so much and so often, that I forget that it really is almost a "hidden" part of Windows.

So, you need a connection string to a database, AND you would like to test that string before you plug it into your application. Doing this is actually very easy. First, right click on your desktop and create a new text file. Name the file whatever you want, but change the file extension to ".udl", which stands for Universal Data Link. Now double click on the file. This is what you will see:

The Provider tab will list all of the available providers installed on your computer. Choose the one you want. In this case we are going with the OLE DB provider for SQL Server.
The next tab is the Connection tab. This is what you see:

Here, you put in the server name, choose the type of authentication, and select the default catalog. The best part is that little button towards the bottom right: "Test Connection." With the simple press of this button, a connection attempt is made using the settings and credentials you have provided. If the test succeeds, you are good to go; if not, well, then you know you have a problem.

The Advanced tab really isn't that interesting. You can set your impersonation level and other things if the provider supports that kind of thing, but really, it isn't vital to the success of your connection string. The really interesting tab is the All tab.

This tab has all of the settings you can tweak for your selected provider type. If you need to set an Application Type, timeouts, a Network Address, or anything else, this is the spot.
After you have everything set, and the test connection succeeds, click OK. That will save all of your settings. Then right click on the file, select "Rename," and change the file extension back to ".txt". Now double click on the text file. TA-Freaking-DA!!!

There is your tested, working connection string, all ready for you to copy and paste into your application.
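If you want to sanity check the string one more time outside of the UDL dialog, a quick PowerShell snippet will do it. This is just a sketch; the provider, server, and database names below are made up, so swap in whatever your .udl file actually produced:

# Sketch only: an example of the kind of string a UDL built with the OLE DB provider
# for SQL Server might produce; replace it with the string from your own .udl file.
$connString = "Provider=SQLOLEDB.1;Integrated Security=SSPI;Initial Catalog=MyDatabase;Data Source=MYSERVER"

$conn = New-Object System.Data.OleDb.OleDbConnection $connString
$conn.Open()    # throws if the string or credentials are bad
$conn.State     # should report "Open"
$conn.Close()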

Monday, March 5, 2012

Big Things Coming!!!

It seems as if the trend at Microsoft over the past 10 years or so has been to sync major updates to their products every three years. So, in keeping with this tradition, Microsoft is releasing a TON of products over the next year.
Take a look at what will be rolling out in the coming months:
  • Windows 8 Desktop
  • Windows 8 Tablet
  • Windows 8 Server
    • Hyper-V
  • Internet Explorer 10
  • IIS 8
  • Office 15
    • Word 15
    • Excel 15
    • Access 15
    • Visio 15
    • Outlook 15
    • InfoPath 15
    • OneNote 15
    • Publisher 15
    • Office 365 Updates
  • SharePoint 15
  • FAST Search For SharePoint 15 (I haven't seen anything on this yet, however I assume that a new version would be released with SharePoint 15)
  • .NET Framework 4.5
  • Visual Studio 11

What do I find most exciting about each of the releases? Well, I haven't really done much other than read about enhancements and new features, but some of the reading I have done has gotten me excited about many things.

Windows 8 Desktop and Tablet

I am really excited about Windows 8 for Desktops and Tablets. I think that the touch interface is going to revolutionize the notebook and tablet industry. Think about a tablet, like the iPad, that has all of the power and usefulness of a desktop computer, AND has the touch screen usability and app interface that you have come to love. Think of a notebook computer that can give directions and do text messages, yet still has a keyboard for typing documents. It all rolls into one with Windows 8. I can't believe that Apple did not do something like this amalgamation before. They are the hardware company that builds their own OS, after all. But they wanted to keep their OSes separate, and now they will be playing catch up with Microsoft, again.

Windows 8 Server

I am just starting my preview of Windows 8 Server. Right off the bat, there are some things that you notice about Windows 8. First and foremost, after you get past the new UI, you notice that there is a theme of Local Server and All Servers. Previous versions of Windows were designed to be islands of functionality. Servers could be linked together, but only as an afterthought. All services needed to be installed on each individual server, and then connections could occur. In Windows 8, the central theme is that more than one server will be joining in the roles that you deploy. Servers that you deploy with a GUI are designed to act as central hubs for the deployment and administration of other servers holding similar roles. There is a new interface that can report events from several servers, as well as an interface to administer those servers from.
Dynamic Access Control is a new feature that gives admins better control over the permissions inside of your file structure taxonomy. Rules and policies are much friendlier to layer on top of file structures.
There are so many changes to Hyper-V that I added it as a bullet under Win 8 Server. Hyper-V steps up with the big boys, allowing for large clusters, Fibre Channel support, and many performance increases. Combine that with live storage migration and Hyper-V addresses many of the advantages VMware had over the Microsoft technology.
Microsoft listened, and stole, all of VMware's advanced networking features and integrated them into the new Hyper-V virtual network switch. In 2008, the virtual switch was introduced to deal with multiple computers sharing a single networking interface. Outside of that, it really didn't do much. Combine it with RRAS and you really had some confusing and difficult configuration on your hands. Those days seem to be behind us with the implementation of port ACLs, private VLANs, per-vNIC bandwidth reservations, QoS, metering, OpenFlow support, VN-Tag support, and network introspection. I admit that I haven't had time to play with all of this stuff yet, but I have been very vocal with Microsoft on the lack of private VLANs on their virtual "switch" in Server 2008. Why call it a switch if it has none of the functionality of a modern network switch? They should have called it a virtual hub. It would have been more accurate.

Internet Explorer 10

IE 10 changes are mainly to incorporate all of the changes to the "Metro" UI that comes with Windows 8. Support for HTML5 is the big game changer, especially with support for Web Sockets. More on that later.
Performance was the name of the game with this release. Microsoft was looking to take down its major competitor, Chrome, and beat it by being the fastest browser on the market. Did they do it?? No. I don't think so. I'll test again when the RTM version is released, but the latest beta isn't nearly as fast as Chrome.

IIS 8

I am primarily a web guy, and the first thing I did when I installed Windows 8 was to add the IIS role so that I could get a look at IIS 8. On the surface, there isn't a whole lot of difference. However, if you do a lot of work with large site deployments, or if you host a good number of sites, you will definitely notice a difference!
The biggest change for "normal" users of IIS, the ones who run in-house corporate intranet or SharePoint deployments, is going to be the Application Initialization module. Ever since the first version of the .NET Framework, users have complained that the first time they hit an ASP.NET site it takes forever to load. This is because a bunch of things happen behind the scenes to make the web site work. Check out Just In Time compilation if you want to know more. (This should get you started)
This delay is supremely annoying, and the worst part is, the more complex the application, the longer the wait. Add that wait to all of the supporting services and applications, and the first time a user hits a complex site, like SharePoint, they could be sitting for quite a while. With IIS 8, Microsoft has finally solved the problem. Applications and sites can now be set to always run, essentially never allowing the worker processes to expire, and automatically starting up if the IIS services are restarted.
This is a big deal, and one that has to be undertaken with careful planning. One of the major reasons for JIT in .NET is to save on resources, and if you set all of your applications to always run, you will waste resources on applications that don't necessarily need to be running.
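If you do decide a site is worth it, the change boils down to a couple of properties. Here is a minimal sketch using the WebAdministration module; the app pool and site names are placeholders, and I'm assuming the property names IIS 8 exposes for this (startMode on the app pool, preloadEnabled on the applications):

Import-Module WebAdministration

# Placeholder names; keep the app pool's worker process alive at all times...
Set-ItemProperty "IIS:\AppPools\MyAppPool" -Name startMode -Value AlwaysRunning

# ...and preload the applications under the site when the pool spins up
Set-ItemProperty "IIS:\Sites\Default Web Site" -Name applicationDefaults.preloadEnabled -Value $true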
If you do a lot of work with SSL and certificates then you will be happy to know that Microsoft has done quite a bit to aid you with SSL. SSL can now use host headers so that multiple SSL sites can be run over the same IP address. IIS 8 also has a feature, called Central Certificate Store, that will allow you to store certificates in a centralized place, allowing for easier deployment as well as easing use in a cluster.
The last big change is for companies that have a lot of sites, and each site has a lot of custom configuration. This custom configuration of many sites makes the IIS XML configuration files, such as applicationHost.config, grow to be very large files. These large configuration files caused IIS to start very slowly and, in some cases, could even cause IIS service to time out during start up. IIS 8 is designed to load these large configuration files easily and quickly, making this particular problem a thing of the past.

Office and SharePoint 15

Not a whole lot is known about the next version of Office and SharePoint. Many things are speculated, but only a few are known for sure. For sure, SharePoint 15 will have an "App" store. This means that companies can release "Apps" for SharePoint. This will allow users to connect directly to SharePoint via the Windows 8 "Metro" interface without having to go through a browser. I am interested in this particular piece of SharePoint 15, especially how it will tie in with the already familiar web interface.
Information Rights Management will FINALLY be a first class citizen in SharePoint 15, and will deploy as an Application Web Service. With all of the document retention and records center stuff put into MOSS 2007 and SharePoint 2010, IRM should have been integrated into SharePoint many years ago, BUT I guess Microsoft was too busy creating the ribbon interface to bother with something so obviously needed in ANY records management system. Sheesh.
Because SharePoint is being used more and more as a knowledge repository, a new module will be released that will make SharePoint a true Learning Management System. There are a TON of LMS products out there, and I find it odd that Microsoft is only now trying to get into the market. The really interesting thing to see will be whether the current LMS companies will embrace SharePoint and move their functionality over to it, or whether they will see SharePoint as a threat and attempt to compete with it.
It is also widely speculated that SharePoint 15 is being built on the 4.5 framework. I'll get into this a bit more below, but with the 4.5 framework's heavy emphasis on HTML 5, could it be possible that Microsoft thought ahead and built SharePoint 15 using Web Socket technology??? That would be super cool!!

Not a whole lot is known about Office 15. What is known for sure is that all of the Office apps will be created to actively interface with the "Metro" UI. That means that their UI will specifically be developed so that you can use the applications with just your fingers. I don't know if this will do away with the ribbon, but it just might.

.NET Framework 4.5

There are lots of really cool updates in the .NET Framework 4.5. Of course, 4.5 will support the creation of Windows 8 apps and the Metro UI. I don't think anyone used Microsoft Touch to do anything at all, so it should be a relatively new API to play around with. New to us, anyway.
The Managed Extensibility Framework (MEF) becomes a first class citizen in the new framework, moving up from an add-on in framework version 4.0. (check this out for information on MEF)
Changes to ASP.NET center on HTML 5. This makes sense, as IIS 8 now supports HTML 5. I see what you did there, Microsoft... Very sneaky!!
The last super cool thing in the 4.5 framework is additional support for parallel computing. I haven't done any work with parallelism, but I have always been intrigued by it. Check out this blog for the skinny on what is new in the 4.5 framework.

Visual Studio 11

With the release of a new framework, a new OS, a new UI, AND a new version of SharePoint, there must be a place where developers can create all sorts of new stuff for them. In short, a TON is new in Visual Studio to accommodate all of the new developer stories for the new products. I really don't want to list them all here, so you can go here to check out all the cool new stuff.

It's a brave new world out there, and, if you are smart, you will begin to look into how you can learn about the new stuff that will soon be in great demand. I am just upset that I will have to take all of the tests over again... Maybe I will wait a year or so this time... but I know I won't...

Tuesday, January 10, 2012

Working From Home

I do a lot of work from home... In fact, for the last several contracts I have had, I have worked out of my home office. Before that, I worked for a company 400 miles away as a remote employee. All told, I have about two years of experience working from my home. It is an interesting situation, and I am going to outline some of my personal findings.

Who Can Work From Home?

Working from home is a very strange dynamic. If you want to work from home, first and foremost, you MUST be a self-motivated person. You have to be the person who does not need outside accolades to keep you moving. The completion of a job and self excellence must be your major motivators, or you will not succeed working from your house.
You have to be the kind of person who is OK with being alone for very long periods of time. In the same vein, you have to be the type of person who doesn't mind that the only people they interact with in person are their family members.

Loneliness

When you work from home, loneliness is a constant theme in your life. When you work in an office, there are always people around and things to do. If you get crazy and just want to goof off for a bit, you can always find someone to talk to, or you can find someone making coffee, or you can just do something other than work. At home, your distractions are much more dangerous, and there is nobody around. When you get up to do something besides work, there is nobody around. Nobody.
You miss all of the daily things that happen in people's lives when you work from home; however, you will know that these things are happening through your normal office email. You can't see Bob's new car, but you will be invited to take a look at it at 10am today!! You won't taste Mary's famous lasagna, but you will know what room your coworkers are eating it in, and you will see all of the thank you and accolade emails that come afterward. You won't see Jon and Sally's new baby that they brought in to meet everyone. And you certainly won't go to HuHot for Jenifer's birthday. You won't be there.

Out of Sight Out of Mind

If you are set on moving up in your company, you will want to avoid teleworking like the black death. Even the lazy guy in the mailroom will get more exposure and face time with the bosses than a teleworking employee. You aren't there. Everyone forgets you even existed. When the bosses are thinking about who to give that new kick ass position to, they will not be thinking of you. Even if you turn in your work on time, and it is awesome, you will not be considered.
This goes for the cherry new projects as well. You aren't there, so you won't be called into the spur-of-the-moment meeting where the project suddenly gets assigned. The only meetings you will be a part of are the ones where you are scheduled to attend and they actually remember to call you.

Dangerous Distractions

Home has the most dangerous distractions. That is where your TV is. That is where your bed is. That is where you keep all of the stuff for that hobby that you love so much. All of that stuff is just a couple of steps away. A distraction at work is still at work. It doesn't really distract you for long. A distraction at home will suck away at your time for hours.

Unrealistic Expectations

People, especially managers, will expect more out of you because you are working from home. It is important that you manage expectations so that you can deliver your work on time and with consistent quality.

How to Succeed Working From Home

It sounds stupid, but it is very difficult to do. To succeed working from home, you have to treat your home office like you do your office at work. You have to treat your work time specifically as WORK time, and you have to let yourself off work at the proper time as well. It is just as easy to work too much, when you work from home, as it is to work too little. An example I give is that I used to get up in the morning at about 6am, work until about 7am, go work out, come back home and start working again, break just long enough to make a sandwich, work until my wife got home at 6pm, hang out with her for a while, then go back to work from about 8pm to 10pm or later, then go to bed. On the weekends, I would get bored and work for several hours. In the end I was working 80 hours or more a week. That is not good for you.
This is what I did to keep sane while working from home:
  • Create a set work schedule, just as you would in the office. Show up on time and LEAVE on time.
  • Constantly send emails and instant messages out to managers and project leads to make sure I was working on what needed to be worked on, and that I was kept in the loop with all decisions.
  • Cut out all dangerous distractions while at work. Treat work time just as I would in the office. If you have kids or pets, they have to go on the same schedule as they would if you were in the office. They are very dangerous distractions.
  • Time and expectation management is vitally important. Give yourself enough time to do your projects working a normal 8 hour shift. Manage expectations with management to make sure that they are not asking more of you because you are working from home.
  • Make time to do something outside the home at some point during the day. In other words, take a lunch. Go talk to somebody. Go do something. Personally, I go to my gym every day. I have a group of guys that I work out with, and that I can talk to. It gives me some personal contact and keeps me sane.
  • Embrace the fact that you will not be there for the little things that happen in the office, but make an effort to attend the major events you are invited to. Birthdays of teammates, office project celebrations, retirements, new employee get-to-know-you lunches: these kinds of things help you keep connected to your office and keep you in everybody's mind.
  • Treat your work time just as you would if you were in the office. If you have to run an errand, let your boss know, just as you would if you had to leave the office.
  • If you are sick, take a sick day. Don't work on the days where you wouldn't go in to the office. Not only do you let your body recover, you let your soul recover as well, and you will turn in your best work.
  • Keep the same grooming habits that you had when you went in to the office.
    • Get up, and take a shower
    • If you shaved everyday going in to the office, shave everyday.
    • I know that women may have an elaborate makeup routine that they would do for work. You don't have to go that far, BUT you should do the minimum makeup. If you have to go somewhere during the day it will cut down on your "get ready" time, AND you still want to keep a routine that prepares you for your work day.
  • Get dressed for work. Do NOT start working in the clothes you wore to bed. Get dressed. It can be more casual than what you would wear in to the office, but get dressed. You want to create some separation between your home life and your work life and getting dressed is a BIG part of it.
Working from home can be very rewarding, but there are many pitfalls. Know that if you fail in your teleworking arrangement, it won't be just you who suffers. One bad apple will spoil teleworking for everyone. Supervisors and management HATE teleworking. It is understandable, as they are controlling types of people and teleworkers are very difficult to control. I have seen entire teleworking programs get scrapped because of a single person who took advantage of the freedom. Do the right thing by everybody and recognize that if you can't resist the dangerous distractions, STOP teleworking.

Allowing remote employees is a big plus for companies as well. The biggest advantage is that when you are hiring for a position, you no longer have to worry about the talent in your specific geographical area; rather, the entire world is available for your hiring. Of course, there are time zone considerations to think about. If you are on the east coast and your employee is on the west coast, contact and meeting times have to be specifically mapped out.

Wednesday, December 21, 2011

Configuring FAST Search Server 2010 To Use SSL With A CA Certificate

I had the opportunity to configure a FAST Search Server 2010 deployment in a secure environment. Instructions for configuring SSL for FAST are fairly straightforward; however, there were a few gotchas involved.

First, install FAST just as you normally would. Follow these instructions from Microsoft: Configure a stand-alone deployment or a multiple server deployment (FAST Search Server 2010 for SharePoint)
You want to use the FASTSearchCert.pfx self-signed certificate that is generated by FAST when using the SecureFASTSearchConnector.ps1 script for the first time. Be sure that the user you pass in the -username switch is the SAME user that is running the SharePoint Search Service that you configured when you created the Content SSA. This user also needs to be a member of the FASTSearchAdministrators local group on the FAST admin and non-admin servers. This is very important!
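For reference, that first run looks something like this. The certificate path, SSA name, and account below are placeholders, so use your own values (the parameter names are the ones from Microsoft's instructions):

# Placeholders throughout; point -certPath at the FASTSearchCert.pfx you exported from the FAST server
.\SecureFASTSearchConnector.ps1 -certPath "D:\Certs\FASTSearchCert.pfx" `
                                -ssaName "FAST Content SSA" `
                                -username "CONTOSO\spSearchService"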
Also, use the normal HTTP settings for the Administrative Services and the Query Services for the time being. We want to make sure that our connections to FAST work without incident BEFORE we complicate things by introducing the CA cert.

After you set up your environment, and you have confirmed that everything is working properly (no errors in the SharePoint 2010 event logs), it is time to complicate our deployment by adding the CA-signed certificates. The instructions for setting up the use of SSL are a bit vague in places, so I will set down what I did to make everything work.

First things first... Check out the following site: Manage certificates (FAST Search Server 2010 for SharePoint)
As you can see, you will need to obtain certificates, all signed by the same Certificate Authority, for any server that is being used. You can really complicate your installation by changing the DNS alias that is hosting your FAST Search Administration and Resource Store IIS web sites. If you choose to do so (not recommended, personally), a specific SSL cert needs to be created, signed by the same CA that is signing the rest of your server certificates, and then bound to those web sites.

Be sure to complete the section on Replacing the Query HTTPS certificate. It is very important that you have the port correctly configured to use the proper certificate.

One gotcha that I ran into was that you do need to update the deployment.xml to reflect your usage of SSL for the Administrative services. Be sure to follow the instructions on the following page: Enable Administration Service over HTTPS (FAST Search Server 2010 for SharePoint). Step number four is the one that points you to change the config.xml file.
I like to do one last check here and attempt to access the secure URLs for my services. You should get 403 errors saying that directory browsing is not permitted. What you should not get is the page saying that there is a problem with the certificate. If you get this page, you need to go back to the instructions, and make sure that you have secured everything correctly and that the proper certificates are installed.

Now that everything on the FAST server is taken care of, go back to your SharePoint 2010 Central Administration server. The server certificates for this server should be installed as well as the root certificate for the CA. The certificate used for the SharePoint server needs to have been created specifically for the SharePoint server. You can't just export the server certificate from the FAST server and install it on the SharePoint server. It must be specifically for the SharePoint server created from the same CA as the FAST certificates.
Get into Central Administration and find your Query Search Service Application properties. Update the Query, Administrative, and Resource Store Service Locations with their secure locations. This is fairly straightforward. To check whether things are working correctly, simply click on the Query Search Service Application, then click FAST Search Administration on the left. Go through each link on the FAST Administration page and confirm that there are no errors.

Next, open up a SharePoint PowerShell command window. Execute the following cmdlet: Ping-SPEnterpriseSearchContentService -hostName [FASTContentDistributor:PORT], where FASTContentDistributor is the proper location for your Content Distributor. Don't forget the port number; it is 13391 if you used the default ports.
This handy dandy cmdlet will give you a listing of the installed Personal certificates on the server and will confirm which one will successfully connect to your newly secured Content Distributor. Copy the thumbprint of the cert that connected and rerun the SecureFASTSearchConnector.ps1 script that you ran before to set up your Content SSA. This time, instead of pointing to a specific certificate file, you will be using the thumbprint of your installed certificate, as shown in the instructions from Microsoft.
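Putting that together, the check and the re-run look roughly like this. The server name, SSA name, and account are placeholders, and the -certThumbprint parameter name is the one Microsoft's documentation uses:

# Placeholder host name; 13391 is the default content distributor port
Ping-SPEnterpriseSearchContentService -HostName "fastadmin.contoso.com:13391"

# Re-run the connector script, this time with the thumbprint of the cert that validated
.\SecureFASTSearchConnector.ps1 -certThumbprint "<thumbprint of the cert that connected>" `
                                -ssaName "FAST Content SSA" `
                                -username "CONTOSO\spSearchService"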

If all goes well you will get the magic words, Connection to contentdistributor [your content distributor site:PORT] successfully validated.
After that, your communications between SharePoint and FAST Search Server will be conducted over SSL. Of course, if you add Content Distributors or Query Service locations, you WILL need to run through the steps of installing certificates and securing those sites just as we did above.

Sunday, December 11, 2011

Google Chrome, User Stylesheets, and Facebook People to Subscribe To Sidebar

I am an avid Facebook user. I really like it. To the point of near addiction... I need help, I really do.
Anyway, recently Facebook has allowed you to "subscribe" to people. What that means is that you can see what they are posting on their walls, but they won't see what you are posting on your wall. Nice for following athletes and other famous people. However, Facebook insists on putting advertisements up for people you should subscribe to. For some reason it kept wanting me to subscribe to various underwear, bikini, and Playboy models. I don't have any of those people as my friends, and I don't visit their web sites, so it isn't a cookie thing. Anyway, I wanted to get rid of it, because it was annoying.

Facebook will not allow you to remove this part of their page from your personal site, so how do you get rid of something like this??? The answer is personal browser stylesheets, or User Stylesheets.

So... What are User Stylesheets? Well, stylesheets are used on web pages to give the pages a uniform look and feel. One example is that you want all text in a table to be boldface. You set up your stylesheet to boldface the text in tables, then apply that stylesheet to all of your web pages. BOOM, all of the text in tables is boldface. Just that easy.
What does that have to do with what we are talking about? You can't change Facebook's stylesheet, so big whoop. Well, modern browsers use what are known as user stylesheets: a stylesheet installed on the user's computer that is applied to all web pages browsed from that user profile. So, say you want all of your text to be red, regardless of what the web page has set. You can do that. You can also run specific JavaScript on the pages, to remove any bad words, or whatever.

I use Chrome for my primary browser, so first I needed to configure Chrome to use user stylesheets. That is easily done. All you do is change the Target property on the Chrome shortcut. You add "--enable-user-stylesheet" after chrome.exe in the text box. That's it! Chrome is now configured to use user stylesheets.

I want to remove the People To Subscribe To section, so I have to figure out what that particular section's ID is. Chrome really helps developers out by including developer tool functionality. You just go to Settings, then Tools, then Developer Tools. I went through the code until I found the element I was looking for. It turns out that the element's ID is pagelet_ego_pane_w.
Thus armed, I now need to write the line of code that will hide it forevermore... Or until Facebook changes the element's ID... Anyway, the code is simple, just this: #pagelet_ego_pane_w { display: none }

The element found, and the code to hide it written, I need to add it to my User Stylesheet. I open Windows Explorer and find my user's AppData folder (this folder is hidden, so you will need to set Explorer to show all hidden folders). Once I am there I can go into the Chrome user stylesheet folder (c:\Users\YOURUSERNAME\AppData\Local\Google\Chrome\User Data\Default\User StyleSheets\). I opened the custom.css file and put my code in at the top. Saved the file and restarted Chrome. Tada!!! No more annoying Subscribe To sidebar!! Hooray!!

Friday, December 2, 2011

Authentication Types and Authentication Providers - SharePoint and IIS

I have been having an interesting discussion with my client that has nearly caused my head to explode. The discussion centers around Negotiate, Kerberos, NTLM, IIS, and SharePoint.

A little background first. The client wanted to set up a SharePoint site that could be accessed both by Kerberos and NTLM. SharePoint 2010 only allows for a single Windows authentication method per zone, so you need to set up two zones for a single web application: one that is configured for Kerberos, and one that is configured for NTLM. Pretty straightforward stuff.
I created a Web Application diagram detailing out the need for these two zones, and was called out by my client. He remarked that if you set your Web Application up for "Negotiate" you can use both Kerberos and NTLM, so the extra zone is not needed.
Wait... What?? Negotiate uses both NTLM and Kerberos?? Eh?? My world had just turned upside down. Kerberos and NTLM are mutually exclusive authentication methods, and cannot be mixed together. You are using NTLM, or you are using Kerberos; there is nothing in between. This is evident in Central Administration by having to select either NTLM or Kerberos as your Windows Authentication.

So what are NTLM and Kerberos? Why don't they work together? Well, Kerberos is a ticket-based authentication method that requires all parties involved to be registered with Active Directory, and trusted to use Kerberos. I like to call this connection to AD the Kerberos chain. Why? Well, because everything is registered and trusted in AD, special things can happen. Once a user is authenticated, servers and applications can use that user credential for many authentications. Thus Kerberos can make multiple "hops" without having to re-authenticate the user. It makes a "chain" of authentication.
It works like this... The user attempts to connect to a web site that uses a database back end. The web site is secured with Kerberos. The user is first prompted for their credentials, and authenticated by AD. The user is issued a "ticket" by the Key Distribution Center (KDC) on the authenticating domain controller.
The web site then makes a call to the back end database. The database asks who wants the data, and the web server responds with the user's ticket. The database says: well, the user is trusted by the KDC, the web server is trusted by the KDC, the web site is trusted by the KDC, I'm trusted by the KDC, and since I trust the KDC, I will send the user the data requested. Everything is chained together by AD, thus the Kerberos chain.
As you can imagine, Kerberos takes some configuration... This can be the tricky part, because all pieces of the puzzle need to be included in AD. That means that the SQL instances and DNS aliases used in IIS need to be registered with Service Principal Names (SPNs). All servers that host the SQL and IIS instances need to be trusted for delegation in AD. And all users involved need to have their SPNs as well. By default, users get an SPN when they are created in Active Directory Users and Computers, but servers are not trusted for delegation by default.
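To make the SPN part concrete, registering them is usually just a couple of setspn commands. The host names and service accounts below are hypothetical, and I'm assuming the Server 2008-era setspn, where -S checks for duplicates before adding:

# Hypothetical names: the web application's DNS alias and the SQL instance, mapped to their service accounts
setspn -S HTTP/portal.contoso.com CONTOSO\svcWebAppPool
setspn -S MSSQLSvc/sql01.contoso.com:1433 CONTOSO\svcSqlService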
Here is a GREAT whitepaper on how to configure Kerberos for SharePoint.
NTLM is a simple challenge-response authentication method that only requires the user to be registered in AD. Because of this, NTLM cannot make the multiple security hops that Kerberos can make. So for each additional hop that is required by the application, the user will need to re-authenticate, or some other trusted credential needs to be used.

Getting back to the story... As you can imagine, the situation set off a flurry of emails, with me trying to explain that you cannot do such a thing, and the client insisting that he had done this impossibility in his test environment. So, like my physics professors told me many years ago, when things don't make sense, go back to the scientific method. I didn't do that...
Instead of having my client describe his environment in excruciating detail, I tried to come up with ways in which he could be thinking NTLM and Kerberos were working in concert. I asked him if he was talking about his IIS settings, which could indeed be set to use both Kerberos and NTLM.

Back in the good old days of the IIS metabase, there was a property called NtAuthenticationProviders. You could use this property to explicitly state which Windows authentication method you wanted to use. In the IIS 6 days, if you wanted to use Kerberos, you needed to make sure that the "Negotiate" value was set. If you wanted to use both, you set the property to Negotiate,NTLM. (If you want to know more about setting this property in IIS 5 and 6, click the link.)
In IIS 7, Microsoft changed IIS fundamentally. Instead of the proprietary metabase, an XML file is used for configuration: by default, the applicationHost.config file at %SYSTEMROOT%\system32\inetsrv\config.
In that file there is a section called windowsAuthentication, and under that section is the providers collection. In that area you can see the values for Negotiate and NTLM. (If you want to know more about setting this property in IIS 7 and 7.5, click the link.)
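If you want to see what a server is actually configured for without opening the XML by hand, you can dump that section with appcmd. A quick sketch (the site name is a placeholder):

# Lists the windowsAuthentication section, including the Negotiate/NTLM provider entries
& "$env:windir\System32\inetsrv\appcmd.exe" list config "Default Web Site" -section:system.webServer/security/authentication/windowsAuthentication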

This is where things can get tricky... SharePoint is nothing more than an IIS-hosted .NET application. IIS is a service on the Windows server platform. Windows, by default, uses Kerberos as its authentication method. Therefore, if your Kerberos chain is configured (all servers trusted for delegation, and SPNs for all DNS aliases, SQL instances, and users), IIS will authenticate the user using Kerberos DESPITE the SharePoint web application being set for NTLM. User impersonation will not be used, but the initial authentication will be Kerberos. A check of the Windows Security logs will confirm this.
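If you want to do that check from PowerShell instead of scrolling through Event Viewer, something like this works. I'm assuming the standard 4624 logon event on Server 2008 R2, whose message names the package that was used:

# Pull recent logon events and keep the ones where Kerberos did the work;
# swap 'Kerberos' for 'NTLM' to see the other side of the story
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4624 } -MaxEvents 50 |
    Where-Object { $_.Message -match 'Authentication Package:\s+Kerberos' } |
    Select-Object TimeCreated, Id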

So, knowing all that I thought that perhaps my client had his SharePoint web application authentication method set for NTLM, but was seeing Kerberos in his security logs. Not the case. A screen shot later proved that he was indeed configured to use Kerberos.

So now I finally get smart and start to apply the scientific method. I asked for a complete description of his environment and what evidence he had that he was being authenticated via NTLM. And the truth finally emerged.
He was creating his web application in his intranet. He would then VPN into the network from an outside network, and then connect to the site with his browser. He was prompted for credentials in a standard NT challenge-response window. It was this window that he was calling NTLM. He thought that Kerberos needed to include the client's computer in the Kerberos chain, and that Kerberos could only be configured if the user's browser was configured to pass the user's logged on credentials. This, of course, is false. Only the user needs to be authenticated, and it is the SECOND computer in the chain that needs to be trusted for delegation. The first hop you get with just the password. Because of this, ANY user can be used as the impersonated user, as long as they are registered with AD. It is how you can change the logged on user in SharePoint. Because his Kerberos chain is intact, Kerberos can be successfully used as the authentication method.

And with that, the mystery was solved and all was well with the world. A second zone was not needed for NTLM, because Kerberos could be used. Despite my diagrams and explanations, my client STILL cannot get his head around the fact that NTLM is not being used at any point. He thinks that if the challenge-response window pops up, you are in NTLM's grip...

The moral of this story is to use the scientific method to solve issues first, rather than attempting to prove that someone is wrong and that you are the smartest guy in the room... It goes better for client relations too.

Monday, November 14, 2011

Installing FAST Search Server 2010 for SharePoint, Gotchas, That'll Get 'Cha!

My client has a need for FAST Search Server 2010 for SharePoint in their SharePoint 2010 farm. So, I went about installing it... Two tickets with Microsoft and three weeks later, FAST is installed and configured. I think I ran into every strange exception and gotcha that FAST has, and I haven't even started to configure it in SharePoint yet. Wow. This was an interesting one, especially since the install and configuration went so easily in my R&D environment. The install at the client site? Absolute nightmare.

If you want the breakdown on how to install FAST, your best bet is to hit up Microsoft's instructions. They are very complete and go over everything that you need for any type of deployment: MSDN Instructions. I'm not really going to go into the install here. I might do a post on the deployment.xml file, but Microsoft does a better job with their install instructions than I could. However, if you are looking for the weird stuff, and the gotchas, you have come to the right place.

First, you need to have Windows Server 2008 R2 64-bit. FAST won't install on anything else. I chose to update the servers with all of the latest service packs and updates before beginning my install. This is a good idea, because if you run into a problem and you need to call Microsoft, the engineers there will likely insist on you updating your OS first. Updating first gets all of that noise out of the way.
My client wanted to have a fault tolerant system, so my architecture included two servers. In FAST terms that meant that I would have one "admin" server, the server that would run the Administration service, and one "non-admin" server, a server that was part of the FAST cluster but did not run the Admin service. Only one server in the FAST farm can run the Admin service.
In creating the multi-server deployment you need to create a file to tell FAST what services will be running where. This is a simple XML file that is referred to as the deployment.xml. More on that little terror later.

Similar to SharePoint, you install the binaries on to your FAST servers, then run a configuration wizard to configure the farm. That part is as simple as three or four clicks. Not a lot is done by the install program other than the usual file moves, registry updates, and assembly deployments. The configuration is where the real action happens. After installing FAST's binaries, I downloaded and installed FAST's service pack. This is a step recommended by Microsoft and just makes good sense. Why not do the service pack install before you configure your deployment? I don't see a valid reason why not. So that is what I do. For installs, I like to turn off the Windows Firewall so that I don't have any trouble with that service blocking any ports that need to be open for the install and configuration to work. After configuration is complete, I add rules to the firewall for my newly installed services, and turn it back on. So, fortified by my experience with EVERY SINGLE Microsoft program to date and how they dealt with the Windows Firewall, I switched it off and went to start the configuration.

PowerShell Requirements
Now, FAST's configuration wizard is basically just a user interface that passes what it gathers and validates to a PowerShell script that actually does the configuration. In order to run PowerShell scripts, you first have to run a quick command in PowerShell to tell the server that it is OK to run PowerShell scripts. You can run individual cmdlets until you are blue in the face, but once you try to run those same cmdlets from a ps1 file, you get a nasty error. Sooooo, one gotcha that I managed to avoid right off the bat is that I always set my servers to be able to run PowerShell scripts during their initial Windows configuration. Check out the Set-ExecutionPolicy cmdlet for more information.
I run a lot of my own scripts, and I never sign them, so I always set my policy to Unrestricted. It is a little bit of a security risk, but I mitigate that by setting the policy to AllSigned after I have completed my installation. All scripts that will be run on a regular basis should be signed and taken through the normal configuration management policy of good software design.
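In practice that is just two commands, run in an elevated PowerShell window before and after the install:

# Before the FAST configuration: let unsigned local scripts run
Set-ExecutionPolicy Unrestricted

# After the install is done and verified: tighten it back down
Set-ExecutionPolicy AllSigned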

Service Account Requirements
The account requirements are kind of misleading, and you really need to be careful with them, ESPECIALLY if you are in an environment that is heavy handed when it comes to GPOs. I chose my admin server and started to run the configuration wizard on that server. The first thing it asks for is the username and password of the account that will be running the FAST Windows services. I had prepared for this by making sure that the service account FAST uses was a domain account, that it had the rights it needed on the database server to create and configure databases, and that it had the minimum rights for an account to run a service (log on locally, log on as a service). So, after I enter the username and password, I get a validation error saying that the account is invalid. What?

I double check the hardware and software requirements and find that all I need is an account with log on locally rights, and that the install account is an administrator on the server.
The install account MUST be a member of the local Administrators group. This is a hard requirement; the script does a validation check. The goofy thing is that if the install account is not a member of the Administrators group, it is the validation of the SERVICE account that fails, saying it is an invalid account. This was a HUGE gotcha for me.
So, after I added the install account to the Administrators group, I was able to get past that validation error.
The other minor gotcha here is that the account the install script uses to do the database work is not the install account; it is the service account. So, before you begin the FAST configuration, be sure to grant the service account at least dbcreator rights. I like to set the account that is doing the database work to SA during the install. That way, any scripts or whatever the install wants to do, it can do on the database server. If you do this, be extra careful that you remove the service account from the SA role IMMEDIATELY after completing the install!!

Disjointed Namespace
The next section of the configuration wants you to enter the fully qualified name of the server in a text box. This I do: FASTAdmin.SharePoint.MyClient.com. Blam! Validation error: Please enter in a valid computer name. After going through the normal spell checks and whatnot, I found that I did indeed have a valid computer name. I went to another server in the same subnet, and pinged my admin server to see if I got proper DNS resolution. I did. So... Whiskey Tango Foxtrot?
When all else fails... read the log files. So I go to the log files and find that my LDAP lookup is failing on the server FQN. What?? I learned that this can be a problem if your DNS namespace is disjointed. What does that mean?
Well... Say you are in an environment that was originally set up as a UNIX shop, or some other type of network that does not support Directory Services and DNS integration. To segment your network into logical units, the network engineer used DNS to designate the divisions of the domain. This is done easily enough by adding prefixes in DNS through BIND commands. When everything is done you have a nicely segmented namespace, with each division having a prefix telling you exactly what division owns what computer. The FQN of the computers ends up being COMPUTERNAME.Division.MyClient.com.

Enter Active Directory. Active Directory uses a protocol called Lightweight Directory Access Protocol (LDAP). It also tightly integrates with DNS. When configuring AD, it is recommended that it be implemented to closely mirror your DNS environment. So, for each DNS segmentation, AD should be segmented with a child domain. This is to ensure that both LDAP resolution and DNS resolution can occur. HOWEVER, Windows has some boxes you can check and settings you can hack so that you can have a single domain, yet keep your segmented DNS environment. This is called a disjointed DNS namespace. It causes problems...

Enter Windows Identity Foundation. WIF is new in the Windows world, and what it does is introduce claims-based authentication. SharePoint 2010 and FAST use claims extensively. FAST uses WIF for all of its authentication, and it just so happens that WIF uses LDAP exclusively to validate computer names. What happens if your LDAP and DNS do not match? Your LDAP query fails and you can't install FAST...
In my log file I see exactly that. The LDAP query that is failing, CN=FASTAdmin,DC=SharePoint,DC=MyClient,DC=com, fails because there is no DC=SharePoint. The SharePoint part of my FQN is simply a DNS prefix, not an actual child domain. I would have to fix this problem before I could move on.

I am happy to learn that the FAST team at Microsoft has addressed my very problem, and has fixed it in FAST Search SP1. So I download that guy and find the setting in the new psconfig.ps1 file that needs to be updated (set $disjointNamespace = $True). I save the file and run it... only to see the exact same error pop up in my log file... Grrrrrr....

As far as I can tell with my work at the client site, and my R&D work, FAST simply will not work with a disjointed namespace. It failed every time I tried it. So, to fix the issue, the DNS prefix was removed and the FQN of the server was set to FASTAdmin.MyClientDomain.com. If you know how to get FAST running in a disjointed namespace, let me know!!

After the move, I no longer saw that particular exception in my log file... New and exciting exceptions awaited me!

FIPS Encryption
The next exception encountered was a lot easier to solve... technically speaking. The solution, however, kicked off a political firestorm that rages to this day. Anyway, this blog is not about office politics...
So, in the logs the exception is:
"This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms"


This one is easy to solve. Simply open the registry, navigate to HKLM\System\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy, and flip the Enabled value to 0. Close the Registry Editor and that is all there is to that one.
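If you would rather script it than click through regedit, something like this does the same flip (assuming the standard Enabled value under that key on Server 2008 R2):

# Turn off FIPS enforcement; a reboot afterward is a good idea
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy' -Name Enabled -Value 0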

Microsoft Distributed Transaction Coordinator (MSDTC)
MSDTC is required for FAST to run. There is a part of the install script that will attempt to install this service if it is not installed, BUT if you have a GPO that blocks MSDTC from installing, you will get the following exception:
Error Unable to get a handle to the Transaction Manager on this machine. (8004d01b)

This one requires that you change your Group Policy settings, then confirm that DTC is installed properly. Next, go into the properties and make sure that the Allow Remote Clients check box on the Security tab is checked. This is important if you are installing a non-admin server later.

Firewall
Now things get really funky. Microsoft is notorious for adding their firewall product to their operating systems, then having their applications completely ignore that a firewall is even there. This leaves the user to scratch their head and wonder why they can't connect to anything on their computer. So, a common process is to disable the firewall completely, do the installs, configure the ports that are needed to access the application, write firewall rules for them, THEN turn the firewall back on. FAST Search Server breaks from this mold... FAST actively looks for the firewall, confirms it is on, then writes its own firewall rules. If the firewall is not detected, yup, you guessed it, an exception is thrown and the configuration fails. You can turn the firewall off after FAST is running, but during the install you must have the firewall turned on. If not, the configuration will not continue. Major headache.
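For what it's worth, once FAST is configured and running you can still lock things down with your own rules. A hedged example, using the default content distributor port mentioned earlier:

# Example only: open the default content distributor port once FAST is up and running
netsh advfirewall firewall add rule name="FAST Content Distributor" dir=in action=allow protocol=TCP localport=13391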

PSConfig.ps1 Problems
The final problem I ran into on the configuration of the admin server was that during the execution of the cmdlets that create the Administration database, the script would somehow lose the FAST PowerShell snap-ins. I don't know why, I don't know how. It seems to be a problem unique to this client's environment; however, I thought I would include it here, just in case somebody else is getting this exception and does not know how to clear it.
The following exception kept popping up during the configuration:
Exception- at System.Diagnostics.Process.StartWithCreateProcess(ProcessStartInfo startInfo)

If you poke through the PSConfig.ps1 script you can see that when this method is called, the install script spawns a new PowerShell instance. For whatever reason, it was when this instance was spawned that my FAST snap-ins would be lost.
What I did to clear it was to trace back to where the method was being called. It was in another file called commontasks.ps1, located in the %FASTInstallDirectory%\installer\scripts\include folder. At line 2118, I added a little logic to detect if the FAST snap-ins were loaded, and, if not, load them:
If ((Get-PSSnapin | ? { $_.Name -eq "Microsoft.FASTSearch.PowerShell" }) -eq $null) { $PSSnapin = Add-PSSnapin Microsoft.FASTSearch.PowerShell -ErrorAction SilentlyContinue | Out-Null }
This code will ensure that the snap-in is loaded and the configuration can continue. What I found out later was that, because of this problem, my database did not get created. Not really a big deal, because the configuration of FAST doesn't write anything to the database; it just configures the FAST services' XML files and other such things. After the configuration completes, a database will need to be created for FAST to use. It must be created before you do any other configuration, such as adding a non-admin server or connecting FAST to the SharePoint farm. Fortunately, it is very easy to do, provided you use all of the same database settings that you passed to the PSConfig.ps1 script.
Run the following in a FAST Administrative PowerShell instance:
Install-FASTSearchAdminDatabase -DbName YOURADMINDATABASENAME -DbServer YOURDBSERVERNAME -Force

It is important to realize, as you run this particular cmdlet, that it will run as the account that you are logged in as. So, be sure to run it using an account that has at least dbcreator rights on the database instance.

After all of these problems, my Admin instance of FAST Search Server 2010 for SharePoint was complete! My problems were over right?? WRONG!!

Deployment.XML File and IPSec Requirements
I'll tackle these two issues together, because they are closely related, and throw the same types of exceptions... Grrrrrr...
After the pain and suffering of installing the Admin server, the non-admin server should have been a walk in the park. I knew all of the gotchas, and I was able to avoid them during the binary install, and most of the server configuration. But as soon as I attempted to configure the non-admin boxes, problems started to pop up.
After running for a good amount of time, the configuration would fail, and the following exception would be in the log:
%FASTInstallDirectory%\bin\MonitoringServiceConfig.exe" Output - Error: The file %FASTInstallDirectory%\etc\middleware.cfg' was not found.

What??? Can an exception get more cryptic? Why not just throw "Object reference not set to an instance of an object"? That would be equally useless, and appropriate at the same time! OK, rant over. This post is too long to subject you to my feelings on Microsoft's exception handling.
What happens during the configuration of a non-admin server is that the non-admin server will attempt to make an IPSec connection to the admin server and download a series of files that configure the services on the non-admin box. In this way you can set up many non-admin servers quickly, only having to enter configuration data once. Sounds great, right? Right...
The problem is that the configuration script does not check to see if this whole download procedure completes successfully. It happily chugs on to validate whether the files that were supposed to have been downloaded exist. If they don't exist, then the script blows up, and you get the idiotic, meaningless exception above...
So, again PowerShell to the rescue. You can attempt to run the IPSec connection and file download using a PowerShell cmdlet. The good news with this cmdlet is that you will get a MUCH better exception if it fails.
The cmdlet is as follows:
Set-FASTSearchIPSec –create
The script will prompt you for a username and password, use your FAST service account.
The cmdlet would chug along for a bit, then produce the following exception:
An error occurred while configuring IPSec - Could not connect to the admin node.
This may be because of,
  1. Invalid admin node name
  2. Invalid baseport. Baseport of admin node and non-admin node must be same
  3. Admin node is not up and running
  4. Missing IPSec rules on admin node. If you added this host to the deployment.xml after running this script on the admin node, you need to rerun the IPSec cmdlet on the admin node
Awesome... What if, as in my case, all of that stuff is good? What if, when you run this same cmdlet on the admin node, everything is awesome??? You have to look at the underlying technology to figure out what is up.
What is the underlying technology? IPSec. What is IPSec? Internet Protocol Security. On Microsoft operating systems, what does everything that uses the Internet come down to? Internet Explorer. I freaking E.
Here is what is going on. IPSec and IE use the same connection settings, for some goofball reason, configured in IE. So if you go into IE's Internet Options, Connections tab, and click on LAN settings, you see a little check box that toggles whether IE automatically detects the Internet connection settings. What this is really doing is sending out a broadcast to see if an Internet proxy server responds. If it does, IE uses that proxy server to connect to the Internet. If it detects nothing, the process will time out and IE will happily connect directly.
If this box is checked, IPSec will attempt to connect the exact same way as IE... Only IPSec doesn't handle the time out like IE does. If IPSec does not detect a proxy server, IPSec up and fails. Fun, huh?
You clear this problem by unchecking the Automatically Detect Settings box, or, if you are using a proxy server, manually inputting your proxy server settings.
This kind of crap is why people hate Microsoft so much. If you integrate everything, fine, integrate EVERYTHING. Don't integrate half, then just quit!! It is frustrating as all outdoors when you run into a problem like this. Why would you check IE for an IPSec issue?????? It makes zero sense. It is like checking your MS Paint settings so that your video card will work correctly.

With that exception cleared, we are good to go, right? Wrong... Run the configuration script again and... same error. Repressing the urge to go on a multi-state shooting spree, I run the Set-FASTSearchIPSec cmdlet again to see what is happening now. A new exception now graces my screen:
XML Validation error: Data at the root level is invalid. Line 1, position 1.
Really??? Really??? What XML file???!!!???
One of the files downloaded is the deployment.xml file that you configured and put into your admin server configuration. This file tells the configuration script what your indexing configuration is going to look like, which servers are running what service, etc. But there is a little catch. If you try to use Visual Studio to create your XML file, Visual Studio will save the file using an encoding that FAST doesn't like, and it will add a single hidden character (a byte order mark) to the very beginning of the file. That will cause everything to go kerplow!! Stupid, right? Yup!! Sure is. Can't use Microsoft's products with... Microsoft's products.
So how to clear this exception?
First you need to go back to your admin server. When you get there, navigate to %FASTInstallDirectory%\etc\config_data\deployment and find the deployment.xml file. Open that guy up and copy everything but the encoding statement at the top of the file. If you don't have an encoding statement at the top of the file, get everything.
Rename deployment.xml to something else... Microsoft_Is_Stupid.xml sounds good to me.
Open up Notepad, paste everything into the new file it creates when the program is started, save the file in the same location as the other file, and name it deployment.xml.
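If you would rather script the copy than do the Notepad dance, a sketch like this rewrites the file without the hidden byte order mark at the front (the paths are placeholders for wherever your FAST install lives):

# Placeholders for the FAST install path; read the renamed file and write a clean
# deployment.xml with no byte order mark at the beginning
$src = 'D:\FASTSearch\etc\config_data\deployment\Microsoft_Is_Stupid.xml'
$dst = 'D:\FASTSearch\etc\config_data\deployment\deployment.xml'
$text = [System.IO.File]::ReadAllText($src)
[System.IO.File]::WriteAllText($dst, $text, (New-Object System.Text.UTF8Encoding($false)))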
Now open up a FAST Admin Shell and run Set-FASTSearchConfiguration. Or restart all of the FAST Services, or reboot, whatever...
After all that is done and the Admin server is back up, run the Set-FASTSearchIPSec cmdlet again. Confirm that it completes successfully. When it has, re-run your configuration script. It should complete successfully this time.

I realize that a lot of these problems come from FAST being a newly acquired software package. I know that the developers on the FAST side of things are working to come into the Microsoft fold, and that this version of FAST is a "1.0" product. I realize that my client's environment was unique, considering their security zeal. This eases the pain of the install a bit, but what really grinds my gears is the absolute lack of documentation of these issues. I had to search blog after blog after blog to find out what was going on, both under the hood and with the exceptions. Microsoft provided very little in terms of support. Sure, I learned a TON about how FAST works and all of the moving parts that go along with configuring it, but I paid for that with the stress and absolute agony of this install.