Monday, November 18, 2013

Moving Folders to Managed Metadata - Keeping the Folder Taxonomies of Lists

I started a new client. They have had a SharePoint deployment for a while, and, because of the natural mistrust that IT people have for their users, they have not implemented a distributed Administration model. Users have the ability to add files and folders to lists and libraries, but do not have the ability to create new sites or libraries.

When a company does this, what happens is that the users recognize the power of lists and libraries, but they are frustrated with the lack of freedom they have to organize and categorize their data. So, what do they do? They create folders. Lots and lots and lots of folders. When you only have the "Shared Documents" library to work with, and all sorts of different files to share, some organization is better than none, and, without guidance, folders are a familiar and easy to use organizational tool.

But what happens when users are left to put folders in place unchecked??? You get DEEEEEEEP hierarchies of folders. Data becomes difficult to find, files get duplicated, AND you can run into file paths that are too long for SharePoint to handle. Bad news.

I am in a situation where I need to remove all of the folders from my document libraries, preserve the folder taxonomy, maintain the ability to see only what is in a specific folder, and make it easy for users to add new documents to the library using their familiar folder taxonomy.

The answer, of course, is to import the folder taxonomy into the Managed Metadata service, then add a Managed Metadata column to that list. The only other wrinkle that I have in this project is that I do not want to publish these folder taxonomies out to the rest of the enterprise. I actually want to get rid of them at a later date. So, I need to create LOCAL term sets rather than GLOBAL term sets. What's the difference? Well, the local term set, while existing in the global Managed Metadata Service, is only available to the site collection within which it resides, via a Group whose IsSiteCollectionGroup property is set to true. The global term set, of course, is available to anyone who subscribes to the Managed Metadata Service.

Now, you COULD accomplish all of the above requirements by simply going to the library, adding a new Managed Metadata column, clicking Customize your term set, and populating the term set manually. Then open each folder and apply the proper tag to each file in the folder. Then open the list in Explorer view and copy everything to the root, deleting folders on the way. That works great for small sets, but I have libraries with over 50,000 items in them and hundreds of folders. I need something to automate the process.

So... First things first. I need to go to the list and get all of the folder names, but I need to maintain the taxonomy. How do we do this? We first need to know a little something about the SPFolder class. What is great about the SPFolder class is that it has a collection property called SubFolders. This gives a listing of all of the child folders that are contained within the original folder. That makes our job easier. What makes our job harder is that each child folder has its own SubFolders property, and the child below that, and the child below that, until the end of time. Whew... How do we pull all of these folder names out? We have to create our own object that has a nested list, and then we use something called a recursive method to pull everything out.

What is a recursive method? Put simply, a recursive method is a method that calls itself. They are very useful in situations where you have to compute something like a factorial, or where there are many nested elements within an object, such as an SPFolder object or an SPWeb object. This type of nested object is very, very common in the SharePoint world, as well as everywhere else.
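
If recursion is new to you, the classic illustration is computing a factorial:

public int Factorial(int n) {
    // The method calls itself with a smaller number until it reaches the base case.
    if (n <= 1) {
        return 1;
    }
    return n * Factorial(n - 1);
}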

So first we create our custom object. Since we are only looking for the names of the folders, we only have to deal with strings.
public class FolderObject {
    public string folderName { get; set; }
    public List<FolderObject> subFolders { get; set; }
}

Pretty simple, right? All we have is a string with a generic list of the object that we just created. This gives us the same structure as the folder name with its child subfolder names.

Now, how do we get all of the folders within a list? Easy, we do a CAML query. BUT, we have to be a bit careful. We want to EVENTUALLY get all of the folders in the entire list, but we want to maintain the taxonomy, so we must do a NON-recursive CAML query. Otherwise our query would return all of the folders in the list on the same level. We don't want that. We just want the folders in the topmost level so that we can walk down the folder hierarchy.
Remember that when dealing with SharePoint lists, you only have one type of object that can be within them: SPListItem objects. So, when you do the query, the output will be an SPListItemCollection.


public List<FolderObject> GetAllFolders(SPList list) {
    List<FolderObject> folderNames = new List<FolderObject>();
    // Non-recursive by default, so only the top-level folders come back.
    SPQuery query = new SPQuery();
    query.Query = "<Where><Eq><FieldRef Name='FSObjType' /><Value Type='Integer'>1</Value></Eq></Where>";
    SPListItemCollection items = list.GetItems(query);
    foreach (SPListItem item in items) {
        SPFolder itemFolder = item.Folder;
        folderNames.Add(GetFolderTaxonomy(itemFolder));
    }
    return folderNames;
}

No problem so far. This little bit of code gets us the root folders in the list. After we have the SPListItemCollection, we run a loop on them to obtain the SPFolder item associated with the SPListItem. Now comes the fun part. We send the folder object to our recursive method, GetFolderTaxonomy.

public FolderObject GetFolderTaxonomy(SPFolder folder) {
    FolderObject folderObject = new FolderObject();
    List<FolderObject> subFolderList = new List<FolderObject>();
    folderObject.folderName = folder.Name;
    if (folder.SubFolders.Count > 0) {
        foreach (SPFolder subFolder in folder.SubFolders) {
            FolderObject subFolderObject = GetFolderTaxonomy(subFolder);
            subFolderList.Add(subFolderObject);
        }
    }
    folderObject.subFolders = subFolderList;
    return folderObject;
}

Here we first create a FolderObject for our folder. We then add the name of the folder as the folderName string property. Next, we check to see if the folder has any child folders. If it does, we send each child SPFolder object back through the GetFolderTaxonomy method to build its own FolderObject. So on it goes until the very bottom of the taxonomy is reached, where SubFolders.Count equals zero. Then the folder name is added to its FolderObject, and that object, with an empty subFolders list, is added to its parent's list. So on and so forth up the taxonomy chain until all folders have been added.
They don't call recursive methods "brute force" methods for nothing...

Now we have a List<FolderObject> that contains the root folders of the list, and, contained within their own subFolders lists, their child folders. We can move on to the next part: adding a local Managed Metadata term set and creating a term for each folder.

Doing anything in the Managed Metadata Service requires us to reach out and get a series of instances. The Managed Metadata Service is a hierarchical service, not unlike how folders work. There are many differences, but, for simplicity's sake, think of the Managed Metadata Service like a folder structure. You need to get into the top folder before you can get into the child folders. The Managed Metadata Service is set up like this:
Term Store
Taxonomy Group
Term Set
Terms

When you are wiring up your code, you have to make sure you have an instance of the parent before you can adjust the child.

First, you need an instance of The TaxonomySession that is associated with the SPSite. Think of this like creating the Client session of a WCF service... Because that is what you are doing...
Then, you reach out to the TermStore that contains, or will contain, your Taxonomy Group. The TermStore is associated with the Managed Metadata Service Application. Unless you have a very large deployment of the Managed Metadata Service, you will only have one term store, so it is reasonably easy to get hold of.
Next, you want to either get a hold of, or create, the Taxonomy Group that will hold your Term Set. In this case, our Taxonomy Group is going to be special. We want a group that is associated with our Site Collection and our Site Collection only. Typically, we would call the CreateGroup method on the TermStore object to create a new Taxonomy Group, but we don't want a global Group, we want a local group.

Back in the early days of SharePoint 2010, creating a local term set programmatically was a big deal. The method to create it was internal, and could only be accessed via reflection or by playing around with some of the Publishing features to make it happen. Now, what we do is call the GetSiteCollectionGroup method of the TermStore object. You can't have more than one local Taxonomy Group, so this method checks to see if there is already a Site Collection Taxonomy Group. If there is, it hands back that instance; if there isn't, it creates it.

Now that we have our Taxonomy Group, we can create our term set. Since I want a Term Set that is unique to the folder structure in my list, I am going to create a new one with the name of my list. After that we simply create the terms in the same taxonomy as our folders.

Again, each Term object has its own Terms collection property, just like the SPFolder objects had their own SubFolders collection property. We use the same recursive method technique to pull our folder names out of our List<FolderObject> and into Taxonomy Terms.

As always when using the Managed Metadata objects, you need to reference Microsoft.SharePoint.Taxonomy.dll and add a using statement for Microsoft.SharePoint.Taxonomy.

public TermSet PopulateTermSet(List<FolderObject> folderNames, SPList list, SPSite site) {
    TaxonomySession taxSession = new TaxonomySession(site);
    TermStore store = null;
    TermSet listTermSet = null;
    if (taxSession.TermStores.Count > 0) {
        store = taxSession.TermStores[0];
        // Local (site collection) group; created on the fly if it does not exist yet.
        Group localGroup = store.GetSiteCollectionGroup(site);
        try {
            listTermSet = localGroup.TermSets[list.Title];
        } catch { }
        if (listTermSet == null) {
            listTermSet = localGroup.CreateTermSet(list.Title);
        } else {
            // A term set with this name already exists, so qualify the name with the web title.
            listTermSet = localGroup.CreateTermSet(string.Format("{0}-{1}", list.ParentWeb.Title, list.Title));
        }
        foreach (FolderObject folderName in folderNames) {
            Term term = null;
            try {
                string normalizedName = TermSet.NormalizeName(folderName.folderName);
                term = listTermSet.Terms[normalizedName];
            } catch { }
            string termName = string.Empty;
            if (term == null) {
                termName = folderName.folderName;
            } else {
                // A term with this name already exists at this level, so qualify it with the list title.
                term = null;
                termName = string.Format("{0}-{1}", list.Title, folderName.folderName);
            }
            term = listTermSet.CreateTerm(termName, 1033);
            if (folderName.subFolders.Count > 0) {
                PutSubTermSet(folderName.subFolders, term);
            }
        }
        store.CommitAll();
    }
    return listTermSet;
}

public void PutSubTermSet(List<FolderObject> folderObjects, Term rootTerm) {
    foreach (FolderObject folderObject in folderObjects) {
        Term term = null;
        try {
            string normalizedName = TermSet.NormalizeName(folderObject.folderName);
            term = rootTerm.Terms[normalizedName];
        } catch { }
        if (term == null) {
            term = rootTerm.CreateTerm(folderObject.folderName, 1033);
        }
        if (folderObject.subFolders.Count > 0) {
            PutSubTermSet(folderObject.subFolders, term);
        }
    }
}

In the code you see a lot of try/catch blocks with empty catch blocks. This is because we need to check whether there is already a Term with the same name at the same level, and indexing into a Terms collection with a name that doesn't exist throws an exception. If a duplicate exists, we change the name of the new term to be more descriptive as to its location. The wrinkle in this code that should be noted is the use of TermSet.NormalizeName. When a Term is created, special characters and extra whitespace are normalized out of the name. So when looking for Terms of the same name, you need to make sure that you are searching with the proper string. TermSet.NormalizeName does that for you.

That's all there is to creating a local Managed Metadata Term Set for a list and transposing the folder taxonomy into the Term Set. From here it is just a matter of adding the proper term to the proper file.

For this you need to first add a Managed Metadata Column to the list, and associate it with the Term Set you created for that list. This is very easy and I have covered it in other blog posts.

public bool AddTermSetToList(SPList list, TermSet termSet) {
            try {
                TaxonomyField locationField = list.Fields.CreateNewField("TaxonomyFieldTypeMulti", "File Location") as TaxonomyField;
                locationField.AllowMultipleValues = true;
                locationField.SspId = termSet.TermStore.Id;
                locationField.TermSetId = termSet.Id;
                list.Fields.Add(locationField);
                list.Update();
                return true;
            } catch {
                return false;
            }
        }

Then, you must iterate through each SPListItem in the list, adding the proper term to the newly created File Location TaxonomyField. This is relatively easy: get the SPFile for the SPListItem (SPListItem.File), then obtain the name of its parent folder (SPListItem.File.ParentFolder.Name).

public bool ApplyTermsToItems(SPList targetList, TermSet listTermSet) {
            foreach (SPListItem item in targetList.Items) {
                string folderName = item.File.ParentFolder.Name;
                if (!string.IsNullOrEmpty(folderName) && targetList.Title != folderName) {
                    TaxonomyField locationField = (TaxonomyField)targetList.Fields["File Location"];
                    Term term = null;
                    try {
                        term = listTermSet.Terms[folderName];
                    } catch { }
                    if (term == null) {
                        foreach (Term innerTerms in listTermSet.Terms) {
                            term = GetTerm(folderName, innerTerms);
                            if (term != null) {
                                break;
                            }
                        }
                    }
                    if (term != null) {
                        string termString = string.Concat(term.GetDefaultLabel(1033), TaxonomyField.TaxonomyGuidLabelDelimiter, term.Id);
                        TaxonomyFieldValueCollection taxonomyCollection = new TaxonomyFieldValueCollection(locationField);
                        taxonomyCollection.PopulateFromLabelGuidPairs(string.Join(TaxonomyField.TaxonomyMultipleTermDelimiter.ToString(), new[] { termString }));
                        item["File Location"] = taxonomyCollection;
                        item.Update();
                    } else {
                        return false;
                    }
                }
            }
            return true;
        }

Because we are, again, dealing with a hierarchical structure, to find the proper terms, you must iterate through the terms:

public Term GetTerm(string termName, Term rootTerm) {
    Term theTerm = null;
    if (rootTerm.Terms.Count > 0) {
        foreach (Term innerTerm in rootTerm.Terms) {
            if (innerTerm.Name != termName) {
                if (innerTerm.Terms.Count > 0) {
                    theTerm = GetTerm(termName, innerTerm);
                    // Stop looking once a match is found further down the tree,
                    // so a later branch can't overwrite it with null.
                    if (theTerm != null) {
                        break;
                    }
                }
            } else {
                theTerm = innerTerm;
                break;
            }
        }
    }
    return theTerm;
}

From here, we move our files to the root folder of the list.

public bool MoveAllFiles(SPList targetList) {
    foreach (SPListItem item in targetList.Items) {
        if (item.File.ParentFolder.Name != targetList.RootFolder.Name) {
            // Build the destination URL in the root folder and move the file there.
            string fileName = string.Format("{0}/{1}", targetList.RootFolder.Url, item.File.Name);
            item.File.MoveTo(fileName, true);
        }
    }
    return true;
}

Finally, we delete all of the folders.

public bool DeleteAllFolders(SPList targetList) {
            SPListItemCollection folderCollection = GetAllFoldersToCollection(targetList);
            for (int i = folderCollection.Count - 1; i >= 0; i--) {
                folderCollection.Delete(i);
            }
            return true;
        }
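
The GetAllFoldersToCollection helper isn't shown above. A minimal sketch, assuming all it needs to do is return every folder item in the list, at any depth, as an SPListItemCollection, might look like this:

public SPListItemCollection GetAllFoldersToCollection(SPList targetList) {
    SPQuery query = new SPQuery();
    // Walk every folder in the list, and return only the folder items themselves.
    query.ViewAttributes = "Scope='RecursiveAll'";
    query.Query = "<Where><Eq><FieldRef Name='FSObjType' /><Value Type='Integer'>1</Value></Eq></Where>";
    return targetList.GetItems(query);
}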

Now you have your same list, but without folders. I also adjusted the list's properties to disable folder creation, and added Metadata Navigation. Now my users can navigate through their folder structure, but no longer have to deal with the problems of file paths that are too long.
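
Disabling folder creation can be done from the list's Advanced Settings page, or with a quick bit of code like this (the Metadata Navigation piece I set up through the list settings UI):

targetList.EnableFolderCreation = false;
targetList.Update();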


Tuesday, March 19, 2013

Generics Can Be a Pain...

I am working on a little program for my client and I am using System.Collections.Generic.List<T>s. Since most of the time I talk about SharePoint lists, I want to make it clear what kind of list I am talking about. I'll complicate things later. ;-)

Anyway, I have two lists, in this case both lists are of type List<SPUser>. One list represents a group of SPUsers that I want to assign tasks to in a task list. The other list represents the users who already have tasks assigned to them.
My requirement is that I can not duplicate users. So, once a user has a task assigned to them, I don't want to assign a new task to them in the same list.

So, I have these two lists of the same type. I need to exclude the users from one list who exist in the other list. Simple!, someone calls, just use LINQ's Except() method! No muss, no fuss!! BZZZZZZZZZZZZZZZ Wrong answer. Except, with no comparer supplied, uses the default equality comparer, and for a reference type like SPUser that means reference equality. The two objects have to be the very same instance. Essentially, if the two objects are not in the exact same memory space, Except will not filter them out. This is not very intuitive, because we with the meat memories say, well, they are the exact same user so it should work! The computer says, nope, the first one is from instance A, occupying memory block A, and the second one is from instance B, occupying memory block B. B != A, therefore NOT THE SAME. It really, really, really sucks.

Ok, we accept that life is hard and we have to do some thinking in order to get what we want to happen. So how can we do it? We could use foreach loops to iterate through the lists.

List<SPUser> thirdList = new List<SPUser>();
foreach (SPUser addUser in addUserList) {
  bool alreadyAssigned = false;
  foreach (SPUser listUser in taskListUsers) {
     if (addUser.LoginName == listUser.LoginName) {
         alreadyAssigned = true;
         break;
     }
  }
  if (!alreadyAssigned) {
      thirdList.Add(addUser);
  }
}
That is a lot of looping. Is there a better way to do this? Fortunately, there is. We can run the List<T>.RemoveAll method with a little bit of LINQ logic to essentially create the dual loop thing above.

addUserList.RemoveAll(u => taskListUsers.Any(tu => u.LoginName == tu.LoginName));

Encapsulated in this one line is the entire mess above. To read this we need to know a little about how LINQ works. If it looks confusing, you really need to learn about Lambda Expressions and Anonymous Functions.
For our purposes here, the "=>" means "WHERE", just like in a SQL query. So we are saying that we want to RemoveAll objects in our list WHERE (=>) the following expression evaluates to TRUE. That's not really what is happening (=> is really the lambda operator, separating the parameter from the expression body), but for this discussion it works.
From here we want to check the entire second list for a match, so we use the Any() extension method. This method works like a foreach loop with a built-in test: it walks the objects in the list and returns as soon as the expression is true for one of them.
The tricky part with the Any() method is that it returns a bool, so whatever calls it must be prepared for that. Our RemoveAll predicate is looking for a TRUE evaluation, so we are good to go.

Looking into the expression inside the Any() method, we return true for any object that has the same SPUser.LoginName as the addUserList item's LoginName. Since it returns true, that particular item will be removed. The RemoveAll() method is already set up for this negative logic, UNLIKE our looping example above. Be careful about that... When I pseudocoded out the loop solution to figure out how to get the data I wanted, I got caught by removing all of the items that I didn't want removed.

What we are left with at the end is the addUserList without any of the items whose LoginNames match any of the LoginNames of the items in taskListUsers.
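
As an aside, Except can be made to behave if you hand it a comparer that knows how to match SPUsers. A minimal sketch, assuming matching on LoginName is good enough (the comparer class here is my own, not anything built in):

public class SPUserLoginComparer : IEqualityComparer<SPUser> {
    public bool Equals(SPUser x, SPUser y) {
        // Two SPUser objects count as the same user if their login names match.
        return x.LoginName == y.LoginName;
    }
    public int GetHashCode(SPUser user) {
        return user.LoginName.GetHashCode();
    }
}

// Usage: everyone in addUserList who is not already in taskListUsers.
IEnumerable<SPUser> usersToAssign = addUserList.Except(taskListUsers, new SPUserLoginComparer());

Keep in mind that Except also de-duplicates the first list, which may or may not matter to you.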

Phew! A pain to be sure, but when you are using List<T> with complex types, you have to be very sure of how to manipulate your lists to get the correct data out.

Tuesday, March 5, 2013

Modal Dialogs, ECMAScript, and Client/Server Interaction

I built a little Modal Dialog Application Page, launched by a Ribbon button, that would take what a user wrote in a text box and send it in an email to all of the "Assigned To"s in a task list.

My client wanted the ability to check the boxes next to the tasks, and have my modal page send a message to just those users who were checked. Fairly simple, right? Not so much. You see, the ribbon button is controlled by SharePoint 2010's ECMAScript (the JavaScript client object model), while my "Notify" page is controlled by .NET managed code. So... How do we get data from the JavaScript world over to .NET managed code? It is fairly elementary to do through Silverlight's API, but what about simple JavaScript and .NET?

It takes a bit of cheating to get it done. First we need to look at how we launch a Modal Dialog page. It is done in JavaScript. I am using a ribbon button to launch mine, so the JavaScript is contained in the button's Custom Action elements file. Anyway, the modal dialog code involves calling the SP.UI.ModalDialog.showModalDialog method and passing in the options that we want for the page. It looks like this:
var editOptions = SP.UI.$create_DialogOptions();
    editOptions.title = "Notify User";
    editOptions.url = "_layouts/SolutionFolder/Notify.aspx";
    editOptions.height = "600";
    editOptions.width = "500";
    editOptions.allowMaximize = "true";
    editOptions.showClose = "true";
    editOptions.args = args;
    editOptions.dialogReturnValueCallback = Function.createDelegate(null, CloseCallBack);
    SP.UI.ModalDialog.showModalDialog(editOptions);

Very straightforward. What we need to pay attention to here is the "args" option. This option allows us to pass objects from the originating script to the modal page. In my case, what I need is the ID of the SPList I am using and the individual IDs of the checked list items.

Fortunately, Microsoft has thought of that and we can obtain those very pieces of information straight away. We need only create a context and call two methods in the sp.js file. For the SPList ID, I call SP.ListOperation.Selection.getSelectedList() and for an array of the selected list item IDs I call SP.ListOperation.Selection.getSelectedItems(). Easy!
Next, I set those objects in to the options args object. It looks like this:
var context = SP.ClientContext.get_current();
var listId = SP.ListOperation.Selection.getSelectedList();
var items = SP.ListOperation.Selection.getSelectedItems();
var args = {
        listId: listId,
        items:  items
           };
This bit of code, of course, goes before you create the DialogOptions object.

Cool! Now I have my options, I launch my Notify.aspx page, and we are good! Not so fast! We still have to get the args object out of the JavaScript client world and into the .NET server world. Now is where we get fancy.

In ASP.NET, how do we get information from the user on the client to the server managed code? We have some sort of a control that passes its user manipulated values to the back end via some sort of user action, like a button click. The same is true here. We create a generic "input" control on our page, and set the value of that control to be the args object. Yay!!

So, on our Modal Page we create a little bit of JavaScript. First, we call the ECMA Script method that will get the data we passed in the args object. We will then use JavaScript to set the value of our input control to be that of the args object.

First we need the input control. Something like this will do:

<input type="hidden" id="args" runat="server" />

Easy enough, right? Note that runat is set to SERVER. This is very important. This control must be a SERVER control, otherwise we will not be able to get the value out in the code-behind. Microsoft has provided a pre-made method to get the args out: the SP.UI.ModalDialog.get_childDialog().get_args() method.
Now, inside a "script" tag on the modal page, we use the following JavaScript:
ExecuteOrDelayUntilScriptLoaded(function () {
            var args = SP.UI.ModalDialog.get_childDialog().get_args();
            document.getElementById('<%= args.ClientID %>').value = JSON.stringify(args);
        }, "sp.js")        

Notice here a couple of things. The entire bit of code is run in the ExecuteOrDelayUntilScriptLoaded delegate. That means that the page (and sp.js) has to be loaded before you can start messing with any values. That means that the stuff we want from the args cannot be gathered in the Page_Load method. You must wait until the page has been completely rendered!
Second, you will notice the funky stuff in the document.getElementById function. This is because .NET will change the ID of all server controls. You need to know what the ID of the control is so that you can set the value, so... You have to call the managed code to get the ID. Yet another reason why you have to wait until the page is completely rendered before you can get to the args value.

This script, on the modal page, sets the value of the control. We are now able to get at the args object, but first we need to set up a couple of classes to shape the data.

public class Args {
     public string ListId { get; set;}
     public System.Collections.Generic.List<Item> Items { get; set; }
}

public class Item {
     public int Id { get; set; }
}

Now we have some fun. We grab the args value and deserialize it with the System.Web.Script.Serialization.JavaScriptSerializer. That turns the JSON into our C# objects: integers for the item IDs and a string for the list ID.

var javaScriptSerializer = new JavaScriptSerializer();
string json = args.Value;
var SelectedValues = javaScriptSerializer.Deserialize<Args>(json);

Now the SelectedValues object contains a string of the list ID and a List of the item IDs. From these we can now plug the values, using a foreach loop, into the rest of the Notify code to send messages out for the items that were checked.
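
A rough sketch of that last step, assuming a hypothetical SendNotification helper that stands in for the rest of the Notify code:

SPWeb web = SPContext.Current.Web;
SPList taskList = web.Lists[new Guid(SelectedValues.ListId)];
foreach (Item selected in SelectedValues.Items) {
    SPListItem task = taskList.GetItemById(selected.Id);
    // SendNotification is a stand-in for whatever builds and sends the e-mail.
    SendNotification(task);
}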

You can, of course, plug other simple objects in to the args object, just as long as there is some analogous type on the Managed side of the fence.

Tuesday, February 26, 2013

SPUser and Query-Based Distribution Lists

My current contract is with a company that decided after they installed Exchange 2003 that "if it ain't broke, don't fix it!!!!" So, they haven't updated Exchange in 10 years. It makes life harder in certain ECM and in Records Management situations, but for the most part it really doesn't affect me with my SharePoint work. Until the client wanted to get a task list created from every user in a very particular Distribution List...

First, what is a Query-Based Distribution List? In the Exchange 2003 world it is a security group that contains LDAP query objects, instead of the typical principal entities that normal groups contain. Now, it is a security group in name only. Because it does not contain actual entities, you can't use it to secure anything. You can only use it to email the users that are returned from the query or queries contained within. What makes it tricky is that the membership of the group is dynamic. The user list is created on the fly by the queries every time the group is called. Also, in Exchange 2003, there is no Exchange API that can be used to connect to these groups to resolve the members. Got all that? Good. Here we go!

First things first... In SharePoint, if you are given a Security Group or a traditional Distribution List, how do you resolve the membership? Simple!! You need only call the SPUtility.GetPrincipalsInGroup method. That guy will return to you an array of SPPrincipalInfo objects, which you can use to create SPUser objects in the manner of your choosing. My preference is to use the SPWeb.EnsureUser method. If the user is not already known to the web, EnsureUser will add them and return the resolved SPUser object. If the user IS already there, it simply returns the SPUser object. It makes no difference whether the group is a Distribution List or a Security Group. This method works for both.
public List<SPUser> GetUsersFromADGroup(string groupName, string groupDisplayName, System.Collections.Generic.List<SPUser> masterList, SPWeb web) {
    bool reachedMaxCount;
    SPPrincipalInfo[] principalInfoArray = SPUtility.GetPrincipalsInGroup(web, groupName, int.MaxValue - 1, out reachedMaxCount);
    if (principalInfoArray.Count() != 0) {
        foreach (SPPrincipalInfo info in principalInfoArray) {
            if (info.PrincipalType == SPPrincipalType.SecurityGroup || info.PrincipalType == SPPrincipalType.DistributionList) {
                // Nested group: recurse to resolve its members too.
                GetUsersFromADGroup(info.LoginName, info.DisplayName, masterList, web);
            } else {
                try {
                    SPUser user = web.EnsureUser(info.LoginName);
                    if (!masterList.Any(u => u.Name == user.Name)) {
                        masterList.Add(user);
                    }
                } catch {
                    continue;
                }
            }
        }
    } else {
        // No principals returned: probably a Query-Based Distribution List.
        GetUsersFromDynamicGroup(groupDisplayName, masterList, web);
    }
    return masterList;
}

A quick bit of code to show how you can get the SPPrincipalInfo array using the SPUtility.GetPrincipalsInGroup method, then create SPUsers from that. The try/catch block is there in case there are any orphaned accounts sitting in the groups. These accounts cannot be resolved and will throw exceptions. You can choose to output these to an error list, or simply ignore them as I do here.


BUT, what about the Query-Based Distribution List (QBDL)? Since the QBDL has no actual members (remember, the membership of the group is determined by an LDAP query, so the entities don't actually "belong" to the group), when you call SPUtility.GetPrincipalsInGroup you get back an SPPrincipalInfo array with no objects. Bummer!

So, where do we go from here? We cannot directly get the group membership, so we have to come at this problem from a different angle. What if we had the LDAP query contained within the AD query object? If we had that, we could use .NET's System.DirectoryServices classes to execute it. Great!! So... where is that query held?
The one thing we know for sure about Microsoft Exchange is that it LOVES updating the Active Directory schema. It does so with every release. Fortunately for us, anything that is added to the AD schema we can very easily pull out and use.
First, you need to add a couple of references. You are going to need to reference System.DirectoryServices and System.DirectoryServices.AccountManagement. Add those guys to your using statements:

using System.DirectoryServices.AccountManagement;
using System.DirectoryServices;

After that we will be constructing the GetUsersFromDynamicGroup method that will be called should the SPPrincipalInfo array return with a count of 0. You can see the check up in the code, if (principalInfoArray.Count() != 0), with our GetUsersFromDynamicGroup method being called in the else branch.

Now, what do we need to make everything happen? First, I need the display name of the group. This is important, because the display name is how the methods in System.DirectoryServices find the actual group. Why? Because that is how the object is named in LDAP. Note that SPUtility.GetPrincipalsInGroup uses the login name of the group, NOT the display name. SPUtility.GetPrincipalsInGroup is looking for AD principals, NOT LDAP objects. These two concepts must be kept separate, or this process will not work (LDAP is looking for CN=GroupName,OU=OrgUnitName,DC=DomainName,DC=com, where a principal lookup is looking for a name like DomainName\GroupLogInName).
After I have the display name, I am going to need all of the pieces required to make an SPUser, so I need the SPWeb object. And since I am returning everything as a List<SPUser>, and I want to append the SPUsers found in the QBDL to whatever was already resolved from the group, I pass in the existing master List as well.

The first thing we are going to do in the method is set up a System.DirectoryServices.AccountManagement.PrincipalContext. Why? Well, I need to get a hold of the System.DirectoryServices.DirectoryEntry of the group. From that object, I can read what the members of the group are, and begin to resolve the users. Fortunately, getting the DirectoryEntry is very easy. I create a PrincipalContext using my domain name, then I create a GroupPrincipal using the PrincipalContext and the group display name. Now, it is just a matter of casting the return of the GroupPrincipal.GetUnderlyingObject method as a DirectoryEntry.
PrincipalContext principalContext = new PrincipalContext(ContextType.Domain, "DomainName");
GroupPrincipal groupPrincipal = GroupPrincipal.FindByIdentity(principalContext, IdentityType.Name, groupDisplayName);
DirectoryEntry group = (DirectoryEntry)groupPrincipal.GetUnderlyingObject();

After we have these objects ready to go, we are ready to start the heavy lifting. Now that we have the group as a DirectoryEntry, we can strip out the members of the group, which in this case will be the query objects. Since there really isn't a DirectoryEntry.Members property that will give us a nice members collection, we have to do that on our own. Luckily, we only need to invoke the group's Members property and cast the result to IEnumerable. From there we can create our "foreach" loop and start our work.

object members = group.Invoke("Members", null);
foreach (object member in (IEnumerable)members) {
   //Do work in here
}

Here is where things get a little messy. We now have the members as generic "objects." We really can't do anything with "objects." C# is a strongly typed language, so we need to transform this "object" into something that has meaning. So, we create a new DirectoryEntry object, pass the member object to the DirectoryEntry constructor, and we automagically have a DirectoryEntry from just a plain old "object." I have to admit here that I don't like using "objects." I like to strongly type everything so there is no confusion at design time or run time as to what objects are and how they can be used... But, because there is no members collection provided by the DirectoryEntry class, I couldn't figure out a way to get the members into a foreach loop otherwise. Sure, I could use some other loop, but... I am lazy and I didn't want to. You can make my code suck less and make your own cool loops. I didn't want to worry about it, so I punted and used "object."
Anyway, we have the member as a DirectoryEntry now. This member represents the query object that contains the LDAP query we need to actually resolve the users in the group. So, we can now finally call up the property that stores the LDAP query and execute it!! Yay!!

The property that concerns us is "msExchDynamicDLFilter." Essentially, all we need is that guy's value, and we are off to the next section of our code. Remember that DirectoryEntry.Properties is a collection indexed by property name, so you retrieve the value much the same way you would any dictionary value:
DirectoryEntry.Properties["msExchDynamicDLFilter"].Value.ToString()
Fun, right?

There is one other property that we need to get. Because we want to execute our LDAP query across the entirety of our domain forest, we want to grab the value of the msExchDynamicDLBaseDN property as well. With this guy we will construct the LDAP URI that we will use as our search base.

Now that we have our search base and our LDAP query, we are ready to search AD for the members of the dynamic group. .NET makes this very easy for us, because Microsoft has included a DirectorySearcher class that we can use to search AD. Intuitive, right? Actually it is:
DirectoryEntry memberEntry = new DirectoryEntry(member);
string ldapBase = memberEntry.Properties["msExchDynamicDLBaseDN"].Value.ToString();
ldapBase = string.Format("LDAP://{0}", ldapBase);
DirectoryEntry adRoot = new DirectoryEntry(ldapBase);
DirectorySearcher search = new DirectorySearcher(adRoot, memberEntry.Properties["msExchDynamicDLFilter"].Value.ToString());
SearchResultCollection results = search.FindAll();
foreach (SearchResult result in results) {
    //Create your SPUsers here
}

As you can see, we create the DirectorySearcher object using the search base (adRoot) and the LDAP query (memberEntry.Properties["msExchDynamicDLFilter"].Value.ToString()). Microsoft provides us with a SearchResultCollection, nice of them, and all we need to do is call the FindAll method to populate it. There is also a FindOne method, if you are only looking for a single item. I'm looking for lots and lots, so I call FindAll.
As with all of Microsoft's "collection" objects, the SearchResultCollection implements IEnumerable, so we can create a foreach loop over it.

Now it is just a matter of getting the property in the SearchResult object that contains the user's login name. After we have that, we need only call SPWeb.EnsureUser(loginName) and we are ready to add our SPUser object to the master List. We do that the same way that we did it above:

string loginName = string.Format("TRONOX\\{0}", result.Properties["samaccountname"][0].ToString());
try {
    SPUser user = web.EnsureUser(loginName);
    if (!masterList.Any(u => u.Name == user.Name)) {
        masterList.Add(user);
    }
} catch {
    continue;
}


You will notice that I do a little LINQ after calling EnsureUser and before I actually add the SPUser object to the List. I don't want any duplicates, so I check to see if there is a user already in the list with the same Name. If there isn't, I add it to the list. If there is, I ignore the object.

That is all there is to it!! It takes some getting around, but it is possible to get the membership of the QBDL!

Here are both of the methods that I was using in their entirety:

public List<SPUser> GetUsersFromADGroup(string groupName, string groupDisplayName, System.Collections.Generic.List<SPUser> masterList, SPWeb web) {
    bool reachedMaxCount;
    SPPrincipalInfo[] principalInfoArray = SPUtility.GetPrincipalsInGroup(web, groupName, int.MaxValue - 1, out reachedMaxCount);
    if (principalInfoArray.Count() != 0) {
        foreach (SPPrincipalInfo info in principalInfoArray) {
            if (info.PrincipalType == SPPrincipalType.SecurityGroup || info.PrincipalType == SPPrincipalType.DistributionList) {
                // Nested group: recurse to resolve its members too.
                GetUsersFromADGroup(info.LoginName, info.DisplayName, masterList, web);
            } else {
                try {
                    SPUser user = web.EnsureUser(info.LoginName);
                    if (!masterList.Any(u => u.Name == user.Name)) {
                        masterList.Add(user);
                    }
                } catch {
                    continue;
                }
            }
        }
    } else {
        // No principals returned: probably a Query-Based Distribution List.
        GetUsersFromDynamicGroup(groupDisplayName, masterList, web);
    }
    return masterList;
}


public List<SPUser> GetUsersFromDynamicGroup(string groupDisplayName, List<SPUser> masterList, SPWeb web) {
    PrincipalContext principalContext = new PrincipalContext(ContextType.Domain, "TRONOX");
    GroupPrincipal groupPrincipal = GroupPrincipal.FindByIdentity(principalContext, IdentityType.Name, groupDisplayName);
    DirectoryEntry group = (DirectoryEntry)groupPrincipal.GetUnderlyingObject();
    object members = group.Invoke("Members", null);
    foreach (object member in (IEnumerable)members) {
        DirectoryEntry memberEntry = new DirectoryEntry(member);
        string ldapBase = memberEntry.Properties["msExchDynamicDLBaseDN"].Value.ToString();
        ldapBase = string.Format("LDAP://{0}", ldapBase);
        DirectoryEntry adRoot = new DirectoryEntry(ldapBase);
        DirectorySearcher search = new DirectorySearcher(adRoot, memberEntry.Properties["msExchDynamicDLFilter"].Value.ToString());
        SearchResultCollection results = search.FindAll();
        foreach (SearchResult result in results) {
            string loginName = string.Format("TRONOX\\{0}", result.Properties["samaccountname"][0].ToString());
            try {
                SPUser user = web.EnsureUser(loginName);
                if (!masterList.Any(u => u.Name == user.Name)) {
                    masterList.Add(user);
                }
            } catch {
                continue;
            }
        }
    }
    return masterList;
}

New Button on DispForm.aspx, Deploying With a Solution

I was asked by my client to create a button on the "View Task" form ribbon that the user could click and automatically complete the task. Sounds easy right? Well, if you are using SharePoint designer it is. SharePoint Designer has a straight forward and easy way to deploy new ribbon buttons, pretty much anywhere you want. They even have a nice step by step instruction manual on how to go about doing it:
Create a custom list form using SharePoint Designer

But what if you want to deploy that button to many different farms? From your development farm to your production farm as part of a feature deployment, perhaps? What do you do? Well, it gets a little more complex now. You need to find the correct ribbon location to add the button to. But SharePoint uses DispForm.aspx for... well... all of its display forms, so what to do?

I am trying to make it harder than it seems. It is actually a piece of cake if you have done any ribbon manipulation at all. And if you haven't, well... maybe this would be an easy one to start on.

First, you need to know about how to make a generic button on a typical list page. Chris O'Brien does an EXCELLENT job of describing how to do that. Go to his blog for instructions.

Got a basic idea? Good. Now that you know how to make ribbon tabs and buttons, you basically know how to create and place buttons and tabs on virtually any page that SharePoint has. You just need to know the LOCATION!! The location is that pesky little attribute on the CommandUIDefinition. Most of the time you will be putting your button with the other buttons SharePoint has, so your attribute will look very similar to this: Location="Ribbon.Tabs._children"

But, let's circle back to the topic of this post. We don't want a new button on the list page; we want it on the DISPLAY page, or DispForm.aspx. Turns out there is a location for that. Just follow all of Chris' instructions on how to add a button or tab, but change your CommandUIDefinition Location attribute to Location="Ribbon.ListForm.Display.Manage.Controls._children"

This is the ribbon location for what you see on the DispForm.aspx page. You can package up your button in a feature and deploy to where ever you would like it. No need to change the form page in every farm!

Tuesday, February 5, 2013

The User Task List Web Part vs The Content Query Web Part

I have a requirement for a task roll-up web part. I have an Event Receiver that spawns a new task list for every list item in another list. The receiver looks into a People Picker field (SPFieldUserMulti) and creates a new task for each of the users in the field.

We wanted a web part that would show only the user's active tasks, to place on the site home page so that the user could seamlessly find the tasks assigned to them without having to traverse a bunch of task lists. So how to do this? There are a couple of different ways. If you are looking for a quick Out Of the Box (OOB) way, you need only drop the User Task List Web Part (UTLWP), from the Social Collaboration Web Part group, on your page and you are done!! The User Task List will look into the site and show all of the tasks that are not marked as complete, in a list. Very easy and very quick.

The other way to do things is almost as easy: you drop a Content Query Web Part (CQWP), from the Content Rollup Web Part group, onto your page. You set the query to the Tasks list type, and then configure the Additional Filters to look at the Assigned To site column with a setting of "is equal to [Me]". The next filter should be the Task Status site column, set to "is not equal to Completed."

Now we have two web parts that will show the tasks assigned to the user throughout the site, all in a matter of just a few moments! Very cool!

So... What if we want more? What if we just want the tasks assigned to the user, but we want them to be current? In other words, we want the task to be assigned to the user, have a task status set to anything but Completed, and have a due date greater than or equal to today.
Now the ease of the User Task List Web Part falls away. For some reason, the makers of the web part didn't add a way to easily change the configuration. It is not possible to change the internal query of the UTLWP to add the extra filter on Due Date. Bummer.

However, the CQWP can be very quickly configured to make this extra change happen, we simply add another filter to our query that sets the Due Date to be greater than or equal to [Today] in the Additional Filters section. Blamo, that easy.

Now, what if we have task lists in other sites within the Site Collection? What if we want to aggregate these tasks to the page in the root site?

The CQWP's base query uses the SPSiteDataQuery object to search through all sites in the site collection for items that meet the search criteria. So it AUTOMATICALLY aggregates across sites and displays the items that meet the query criteria. Dead easy, right?
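
If you are curious what that looks like in code, here is a rough sketch of the kind of query the CQWP is running under the covers (the field names and filter values are my own, matching the filters described above):

SPSiteDataQuery query = new SPSiteDataQuery();
query.Webs = "<Webs Scope='SiteCollection' />";
query.Lists = "<Lists ServerTemplate='107' />"; // 107 = Tasks list template
query.ViewFields = "<FieldRef Name='Title' /><FieldRef Name='DueDate' Nullable='TRUE' /><FieldRef Name='Status' Nullable='TRUE' />";
query.Query = "<Where><And><And>" +
              "<Eq><FieldRef Name='AssignedTo' /><Value Type='Integer'><UserID /></Value></Eq>" +
              "<Neq><FieldRef Name='Status' /><Value Type='Text'>Completed</Value></Neq>" +
              "</And>" +
              "<Geq><FieldRef Name='DueDate' /><Value Type='DateTime'><Today /></Value></Geq>" +
              "</And></Where>";
DataTable myCurrentTasks = SPContext.Current.Web.GetSiteData(query);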

The UTLWP will only show tasks in the current site. Big bummer. BUT, you can change the base query from the standard site query to an SPSiteDataQuery in a few minor steps. First, drop the UTLWP onto your page. Then click the down arrow at the upper right of the web part and select "Export." This will download the web part to your local computer as a .dwp file, which is just an XML file of the web part's settings.
Open the DWP file in a text editor, preferably one that will color code and format the text as XML. This is not a requirement, but it really does make it easier to read.
At the bottom of the file just above the closing </WebPart> tag put in this tag:
<QuerySiteCollection xmlns="http://schemas.microsoft.com/WebPart/v2/Aggregation">true</QuerySiteCollection>
Save the file and go back to your SharePoint page. In the Web Part selector ribbon, there is a link to upload a web part. Simply click on this and upload the dwp file you just manipulated. Then just drop it on your page.
BINGO!! You have a UTLWP that will show all the uncompleted tasks for the user in all sites in the site collection.

So, in the battle between these two very useful web parts, who wins? Of course it all depends on your business needs. My particular business needs point me towards the more flexible CQWP, because I need to show the tasks for the user that are not complete, and not overdue. It also allows me to add another CQWP on the page to show the user the tasks that are not complete and ARE overdue. The UTLWP, while very useful, does not allow me to do that.