Ramblings of a Mad Computer Guy

Monday, November 19, 2018

As I started my work to migrate files from file shares to SharePoint, I realized that I needed some sort of scan of the existing structure and security as it stands today. I needed it to be friendly enough to hand over to regular users so that they could get an idea of what kind of work needed to be done to prepare for migration.

Since users are familiar with the tree hierarchy, I wanted to output something that would maintain that look and feel. I also wanted them to be able to hide sections of the output. If a large part of the structure just inherits permissions, doesn't apply, has already been dealt with, or whatever, I wanted them to be able to collapse that section.
PowerShell doesn't really have a good output for tree views. Sure, tree is around, but it wouldn't serve my needs. Maybe I just don't know it well enough yet...
I didn't want an Excel output, because it really doesn't have a great tree view. Again, maybe I just don't know the product well enough.
I chose to create a simple HTML file as the output and use some simple CSS and JavaScript to get the functionality I wanted. One of the cleanest tree views I have seen comes from W3Schools.com. They have a great site for this kind of thing and their tree view was exactly what I was looking for.
Once I figured out how to lay out the HTML and how the nested unordered lists would work, the whole thing came together pretty quickly. PowerShell has some out-of-the-box cmdlets that do the heavy lifting. I made use of the normal Get-Item and Get-ChildItem cmdlets to get collections of folder objects, then simply called the Get-Acl cmdlet to get the security of each folder.
The ACL object returned by Get-Acl exposes its access rules through the Access property, and each rule has a handy IsInherited property that lets you know whether that entry is inherited from the parent or is custom security applied directly to the folder.
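Here is roughly what that check looks like in practice. This is only a sketch (the share path is a placeholder), not the full script:

```powershell
# Flag folders that carry any access rule that is NOT inherited from the parent.
$root = '\\fileserver\share'   # placeholder path; point this at your own share

Get-ChildItem -Path $root -Directory -Recurse | ForEach-Object {
    $acl = Get-Acl -Path $_.FullName
    # A folder has "custom" security if any of its access rules is not inherited
    $hasCustom = @($acl.Access | Where-Object { -not $_.IsInherited }).Count -gt 0
    [PSCustomObject]@{
        Folder         = $_.FullName
        CustomSecurity = $hasCustom
    }
}
```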
The only tricky part of all of this was getting the HTML set up. The nesting works by having the child items contained in an unordered list with a "nested" class, enclosed within the parent's list item. The parent's label is wrapped in a span with the "caret" class so the JavaScript click functionality can be attached.
In deep folder taxonomies, the list nesting and closing can get pretty complex. I solved this with three functions: one that writes the parent folder's opening HTML, one that closes the parent's HTML, and one that writes the HTML for a childless folder.
When all of the shouting was over, I had a reasonably elegant script that determined whether a folder had child items. If it did, it was designated a parent item and the parent HTML was written. The child folder collection was then sent to a loop that called the same logic on each child to determine whether it was itself a parent, and so on.
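To give you an idea of the shape of the thing, here is a rough sketch of how the three helper functions and the recursion fit together. The helper names are made up for illustration, and the caret/nested classes come straight from the W3Schools tree view:

```powershell
# Hypothetical helper functions; the real script's names may differ.
function Write-ParentOpen([string]$Name) { "<li><span class='caret'>$Name</span><ul class='nested'>" }
function Write-ParentClose { "</ul></li>" }
function Write-Leaf([string]$Name) { "<li>$Name</li>" }

function Get-FolderHtml([string]$Path) {
    $folder   = Get-Item -Path $Path
    $children = @(Get-ChildItem -Path $Path -Directory)
    $acl      = Get-Acl -Path $Path
    $note     = if (@($acl.Access | Where-Object { -not $_.IsInherited }).Count -gt 0) { ' (custom security)' } else { '' }

    if ($children.Count -gt 0) {
        # Parent folder: open the caret line item, recurse into children, then close it
        Write-ParentOpen "$($folder.Name)$note"
        foreach ($child in $children) { Get-FolderHtml -Path $child.FullName }
        Write-ParentClose
    }
    else {
        # Childless folder: just a plain line item
        Write-Leaf "$($folder.Name)$note"
    }
}

# Wrap the output in a root <ul> and pair the file with the W3Schools tree-view CSS/JS.
$body = (Get-FolderHtml -Path '\\fileserver\share') -join "`n"
"<ul>$body</ul>" | Set-Content -Path 'C:\temp\ShareTree.html'
```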
Monday, October 29, 2018
SharePoint File Migrations - What the Hell am I Getting Myself In To
The File Migration is on in a big way in my company. All file shares are to be migrated to SharePoint by 12/31/2019.
My company's organisation has Corporate sitting at the top as, essentially, a holding company. Our divisions work more like subsidiaries, with their IT staff reporting to their local leadership and no connection to Corporate. Local does not report to Corporate. I work for Corporate.

The majority of our IT staff is purely break/fix. The divisions are mostly in rural locations, where IT is an afterthought at best. The IT staff often comes down to someone who works at the division whose child is "good" at computers. What does that mean for me... Well... To put it bluntly, a lack of professionalism.
Documentation is non-existent. The ability to write documentation is... questionable. Most of these people haven't written anything of significance since High School. Yet, one of the requirements for the project is that they write a design document that will detail their SharePoint site designs with respect to their internal departments. They also need to write up a security model.
I refuse to accept that the IT staff is incapable of doing this project. Difficult, yes. Outside their wheelhouse, yes. The trouble I see is that they just won't want to do it. The project will be labor intensive, and I think it will keep getting pushed to the side.
I'll need a way to motivate the staff to complete the project, but how do you motivate a staff that doesn't report to you, doesn't like you, and doesn't particularly feel the need to listen? On top of that, they are geographically disparate. I can't throw an end-of-project pizza party. I can't recommend bonus money. I can't even send out McDonald's gift certificates. One, because most of the towns don't have a McDonald's, and two, company policy prohibits it.
I feel like I'm between a rock and a hard place, and I really should be looking for the exit... But I'm getting off on the challenge. If I pull it off, it will be a crowning achievement for me. Sure it won't look very fancy on a resume, but it will be a diplomatic and technical success... If I can pull it off.
Thursday, July 5, 2018
SharePoint File Share Migrations - Users Suck
More of a rant than anything useful...
As an IT professional, when considering any type of migration, you need to have an overriding benefit for that migration. Fundamentally, is the Juice worth the Squeeze?
When you migrate files to SP, you need to do a few things to ensure that the Juice IS worth the Squeeze.
What is the first and most fundamental need of users in a file share or in SP? Retrievability. Users need to be able to find what they put in, quickly and easily.

With efficient use of metadata, SharePoint makes retrievability much more efficient than a file share. However, that comes at a cost: the user must fill in that metadata. Users HATE filling in metadata.

When it comes to users, they love unmanaged data. A user would rather throw their documents and data into a single spot and bitch about not being able to find anything than fill in a few columns of data.

Even more than bitching about not being able to find anything, users love folders. Users would rather save copies of the same document to 50 different folders than save a single document and add a couple of columns of data.

It all comes down to basic human nature. People have a very hard time with delayed gratification. Trading a few columns of data NOW for the delayed benefit of retrievability is not seen as practical. Even when shown actual real-time data proving that retrieving a single document using metadata is 2 to 3 times faster than navigating even a known folder structure, filling in metadata is seen as a waste of time.

BUT what is the overall reason you are migrating to SharePoint? Most of the time it is because the decision makers see that data showing time savings of 2 to 3 times and WANT that productivity increase, along with other added benefits such as in-place records retention, easier collaboration, and file sharing.

The primary issue, however, is that users rarely WANT to do any type of migration. They are typically TOLD to do the migration by management. Most often, users must do the migration work in addition to their normal workload. This means that no one wants to do the migration, it interferes with their typical day, and the migration isn't just a little bit of work. It is a LOT of work. Migration software can only go so far; user interaction is required at some point.
So you end up in this loop:
- Users are forced to examine their folder structures and come up with a site design with libraries and security models for their documents.
- Metadata is agreed upon and added as content types to libraries.
- Pilot migration occurs, users realize the amount of work needed for migration.
- Users push back on metadata, demanding that the folder structure just be picked up and moved in to SP.
- SharePoint Team pushes back, because they know that moving from one unmanaged data storage system to another unmanaged data storage system will do no one any good, and will likely be worse than when they started.
- Management sides with users, because they don't know any better.
- SP team does what it can, breaking up the folder structure into sites and libraries as best it can based on user needs.
- Post migration users complain loudly about not being able to find anything.
- Management comes to SP to find out why users can't find anything.
- SP team responds to management by saying they should have used metadata.
- Management creates project to look in to metadata usage.
- Repeat at step 1.
This whole rinse/repeat process is why SharePoint consulting is so much better than SharePoint salary work. Getting paid by the hour means that however many cycles you roll through, it all pays the same, and I'm guaranteed income for a very long time. As a salaried company person, you get frustrated at the repeat cycle of user bull crap. You get sick of the constant battling with users over the cost/benefit of metadata over folders. You get tired of spending hours of your time spinning your wheels, doing it wrong while KNOWING you will need to redo the work, then having to swallow your resentment as the same users come back to you saying that SharePoint sucks, when you KNOW that it is the unmanaged data that sucks.
I imagine it is a lot like an internist who spends a lot of time treating a patient whose medical issues all trace back to obesity. They spend time and effort coming up with diet plans and arranging nutritionists and other professionals to help this person out, only to have the patient go out and eat the same crap they have always eaten. Eventually that doctor just says, screw that guy. That is what your SharePoint people eventually say, and they leave to become consultants.
Rant over. Time to meet with users about how to migrate their folders to metadata... Again.
Friday, September 11, 2015
Moving a File to Another Site Collection, Using REST, High Trust App Only Permissions, AND in a Console Program - SharePoint 2013 On Premises
I know, I know, I'm an over achiever. Here is the deal, I needed to move a bunch of files from one site collection to another. Not a big deal, a pretty elementary task that I have done many times using the Client Side Object Model. However, I knew that in the future I would need something similar to move files from a SharePoint hosted app to a Records Center site collection via Workflow or some other such method. Since I like the REST API... a lot... I decided to take a whack at moving the file that way.
BUT, since I knew that I would need to create web services that would interact with SharePoint via REST or CSOM, as add-ins (Microsoft changed the name form Apps...) I wanted to use this authentication model for this task.
Program
With any program you need to start with requirements. What do we want to do, and what limitations do we have? I discussed some of these above.
Here are our requirements for this program:
- Move a file from a library to another library in a different site collection
- Move the file's metadata (the Title field), and add it to the fields of the new library
- Authenticate as a High Trust App
- Use REST API
- Program must be external to SharePoint
- Console Application
Fun, right?
SharePoint Side
First things first. I'm going to assume that you have Apps and High Trust Add-Ins already configured. If you don't, you need to do that first. I am also going to assume you know the difference between ACS and high trust Add-Ins; you have to know the difference or your stuff won't work. This is specifically for High Trust Add-Ins on an On Premises SharePoint 2013 deployment. If you are attempting to do this against SharePoint Online, you can, but you will need to change the way you register and authenticate your add-in.
Register Add-In
We start by registering an add-in. This tells SharePoint that an Add-In exists and gives it an Add-In identity. We will use this identity when we give permissions to the add-in and we use it to create access tokens in our program. Again, this is a process that is well documented.
All this process does is to register the add-in with the configuration database for this web application. The web you do this in really doesn't matter. Once an app is registered, the Client ID can be used to grant permissions anywhere in the Web Application.
What you need to focus on is when you click "generate" for your Client ID, make a note of it so that you can use it later on.
The Client Secret I always click generate on too, just for kicks. It is required for the form, but we don't need it. The Title should be something descriptive for the accessing program, but it is arbitrary; I used "File Mover Program." The App Domain is again needed for the form but not for our purposes; I used the normal URL for my Provider Hosted Add-Ins, apps.mydomain.com. You can leave Redirect URL blank. Make sure you make a note of the Client ID! Click Create.
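If you would rather script the registration than click through the form, the same thing can be done from the SharePoint Management Shell. This is a sketch with placeholder values (site URL, Client ID), not exactly what I ran:

```powershell
# Sketch: register the add-in principal from PowerShell instead of appregnew.aspx.
$clientId = '11111111-2222-3333-4444-555555555555'          # your generated Client ID
$site     = Get-SPSite 'http://spalnnodom/sites/contentorg2' # any site in the web application
$realm    = Get-SPAuthenticationRealm -ServiceContext $site

Register-SPAppPrincipal -NameIdentifier "$clientId@$realm" `
                        -Site $site.RootWeb `
                        -DisplayName 'File Mover Program'
```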
Give Add-In Permissions
Now is the time on Sprockets where we give add-ins permission. I just dated myself... but I digress...
Permissions are granted in two places: the source web needs READ permission, and the target web needs WRITE permission. Because we are looking to do this in two separate site collections, it is best to grant access at the source and target web level. That ensures that our add-in doesn't have more permissions than required. This process MUST be done in each web that your add-in will be accessing, so that the web knows that the add-in has permission.
This is, yet again, a well documented process. What is important here is that we add AllowAppOnlyPolicy="true" to our AppPermissionRequest node. This tells the web that this add-in can work without having a user attached to it or, "Add-In only" permissions. The kicker here is that we can't send a user with our access token request. More on that later.
In this case we grant the appropriate permissions as required.
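For reference, the permission request XML looks something like this. Treat it as a sketch: use Right="Read" on the source web and Right="Write" on the target web.

```xml
<!-- Sketch of the permission request XML pasted into each web.
     AllowAppOnlyPolicy="true" is the key piece for add-in only calls. -->
<AppPermissionRequests AllowAppOnlyPolicy="true">
  <AppPermissionRequest Scope="http://sharepoint/content/sitecollection/web" Right="Read" />
</AppPermissionRequests>
```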
Console Program
Now that we have our Add-In registered and granted permissions, we are ready to code!!!
I write my programs in C#. I'm a .NET guy and that is what I do. You could, at this point, write the program in whatever language you want, but I don't want to. Because I am using C#, I get to use a very handy class that Microsoft has created for .NET developers: the TokenHelper class.
Set up Project
Start by firing up Visual Studio (I'm using Visual Studio 2015) and creating a new C#, Windows, Classic Desktop project using the good old fashioned Console Application template. Yay!! Console Application programs ROCK!!
That is going to set up everything and create the handy dandy Program.cs class with the all familiar Main method. Since I don't care about you judging me about my spaghetti code and not using OOP practices, everything goes in that method! The good news is that it is only about 70 lines at most.
References
Next thing we do is set up the references that we are going to need. Most of these are going to be used by the TokenHelper class, and not by our actual program. Our program works almost completely with the references created Out Of the Box. Because it is created to work in just about every situation, the TokenHelper needs lots of references. Which brings us to the obvious question of, how do I get the TokenHelper class?
Well... It comes with the Visual Studio SharePoint App templates. So if you don't have one handy, you can create an empty Provider Hosted add-in and copy the code from its TokenHelper. Then create a class in your Console Application called TokenHelper and paste the code into that, or pick your favorite way to steal code and do that. Whatever floats your boat.
The problem now is that you have a bunch of code that references assemblies that you don't have referenced in your program. So reference the following: Microsoft.IdentityModel, Microsoft.IdentityModel.Extensions, Microsoft.SharePoint.Client, and Microsoft.SharePoint.Client.Runtime. If I didn't get them all here, Visual Studio will yell at you, and you can find out what you need by comparing the references in the provider hosted app with what you have in your console program.
We are going to be making some HTTP calls and processing the responses so you are going to need to reference System.Web and System.Web.Extensions as well.
We want to use JSON as our data type, so use Nuget and download Newtonsoft.JSON as well.
App.config Configuration
Now we are ready to start... sort of. In your App.config file, create a node called appSettings. There we need to add the information that the TokenHelper uses to create the access tokens. We need four keys: ClientId (the Client ID we got when we registered the add-in), ClientSigningCertificatePath (the path to the certificate you used to create the high trust), ClientSigningCertificatePassword (the password for that certificate), and IssuerId (the issuer ID for your high trust environment).
These things are all very important, because if any of them are incorrect, you will not get an access token.
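To make that concrete, the appSettings block ends up looking something like this (all of the values below are placeholders):

```xml
<!-- Sketch of the appSettings section the TokenHelper reads; values are placeholders. -->
<configuration>
  <appSettings>
    <add key="ClientId" value="11111111-2222-3333-4444-555555555555" />
    <add key="ClientSigningCertificatePath" value="C:\Certs\HighTrustCert.pfx" />
    <add key="ClientSigningCertificatePassword" value="CertPasswordHere" />
    <add key="IssuerId" value="66666666-7777-8888-9999-000000000000" />
  </appSettings>
</configuration>
```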
Program Code
Finally ready to code! The code is very straight forward.
GET
1: const string targetSiteCollectionUrl = "http://spalnnodom";
2: const string sourceSiteCollectionUrl = "http://spalnnodom/sites/contentorg2/";
3: Uri sourceSiteUri = new Uri(sourceSiteCollectionUrl);
4: Uri targetSiteUri = new Uri(targetSiteCollectionUrl);
5: //Get the access token for the URL.
6: // Requires this app to be registered with the tenant. A null user identity acquires an app-only token. App must be registered for app-only permissions.
7: // For app + user credential is required
9: string sourceAccessToken = TokenHelper.GetS2SAccessTokenWithWindowsIdentity(sourceSiteUri, null);
10: string targetAccessToken = TokenHelper.GetS2SAccessTokenWithWindowsIdentity(targetSiteUri, null);
11: //Get source file as stream
12: string urlSourceFile = "http://spalnnodom/sites/contentorg2/_api/Web/GetFileByServerRelativeUrl('/sites/contentorg2/Records/SummaryEmails.docx')/$value";
13: HttpWebRequest fileRequest = (HttpWebRequest)HttpWebRequest.Create(urlSourceFile);
14: fileRequest.Method = "GET";
16: fileRequest.Accept = "application/json;odata=verbose";
17: fileRequest.Headers.Add("Authorization", "Bearer " + sourceAccessToken);
18: WebResponse webResponse = fileRequest.GetResponse();
19: Stream fileResponseStream = webResponse.GetResponseStream();
We begin by adding two constants, our target and our source URLs. We convert these to URIs and call the TokenHelper.GetS2SAccessTokenWithWindowsIdentity method.
"NOW, JUST HOLD ON A DAMN MINUTE!!" you say "You said earlier that we were doing add-in only permissions. Why in blue blazes are you calling a method with Windows Identity????"
You are very smart, you get a cookie. This is one of those methods where the developer didn't think very hard about the name when he wrote it, or maybe MSFT never wanted to show the world this method, or... I don't know... Anyway, the method is called GetS2SAccessTokenWithWindowsIdentity, but it is designed to be used whether you need add-in only access tokens or any other type of S2S access token. The way we get add-in only permissions is to pass in null as the Windows user principal. Goofy, right? That is the first "gotcha" of this process.

I create two access tokens, one for the source web and one for the target web. Because I have given the add-in different permissions in each web, I need two different access tokens.

Next in the code, we format the URL that we will use in our HttpWebRequest to contact the SharePoint RESTful API. Because we are looking for a file, we don't go to the list; we go to where files are stored according to SharePoint: the web.
Now, if you are an old-timer, like me, this makes perfect sense. When we did development in the SharePoint Server Model, you got your SPFile objects from the SPWeb object and go to the "folder" that the file resides in, no muss no fuss.
However, if you are new to SharePoint, this URL bakes your noodle. You look for the file in the web object, but you pass in the relative path that includes the library. This all comes from the long and sordid history of SharePoint. Just roll with it. If you really want to conceptualize it, when you work with SharePoint files, think of the web as the root folder in a file structure, libraries as the next folder, and any SPFolder in your library as the next folder in the hierarchy. Since I don't have any other folders in my library, I use the root folder, the library URL.
One very important part of this URL is the very last part "/$value". This part tells the SharePoint API to return the actual binary data of the file rather than the SPFile object.
The rest is what makes up a REST call to SharePoint 2013. What should draw your attention is line 17. Here is where we pass the OAuth token obtained from the TokenHelper class. This is what will tell SharePoint that we are making the request as a registered app that is fully trusted.
After that, we use a Stream object to prepare the file binary to be moved to the new library.
Request Digest
Until now things have been easy. Now it gets tricky. For a POST, SharePoint requires that we send two types of authentication. One we already have: the access token. However, we need to use that access token to get a Request Digest token. The Request Digest is a client-side token used to validate the client and prevent malicious attacks. The token is unique to a user and a site and is only valid for a (configurable) limited time.
```csharp
//Obtain FormDigest for upload POST
HttpWebRequest digestRequest = (HttpWebRequest)HttpWebRequest.Create("http://spalnnodom/_api/contextinfo");
digestRequest.Method = "POST";
digestRequest.Accept = "application/json;odata=verbose";
//Authentication
digestRequest.Headers.Add("Authorization", "Bearer " + targetAccessToken);
//ContentLength must be "0" for the FormDigest request
digestRequest.ContentLength = 0;
HttpWebResponse digestResponse = (HttpWebResponse)digestRequest.GetResponse();
Stream webStream = digestResponse.GetResponseStream();
//Deserialize the JSON object in the response. Uses the Newtonsoft.Json NuGet package
StreamReader responseReader = new StreamReader(webStream);
string newFormDigest = string.Empty;
string response = responseReader.ReadToEnd();
var jObj = (JObject)JsonConvert.DeserializeObject(response);
foreach (var item in jObj["d"].Children())
{
    newFormDigest = item.First()["FormDigestValue"].ToString();
}
responseReader.Close();
```
Again, we create an HttpWebRequest, this time sent to the TARGET web's special contextinfo action, which returns the Request Digest token. It is very similar to the GET we did earlier; the only real difference is that we send it with a ContentLength of 0. This is important: you MUST have a ContentLength of 0 or you will get an error. The other interesting part is that we parse the response using the Newtonsoft.Json classes. We didn't do this with the file because that came to us as an octet stream rather than a JSON object.
POST
Now we are finally ready to upload our file in to the target library.
```csharp
//Upload file
string urlTargetFolder = "http://spalnnodom/_api/web/lists/getbytitle('Documents')/RootFolder/Files/add(url='MovedFileNameSCMove.docx',overwrite='true')";
HttpWebRequest uploadFile = (HttpWebRequest)HttpWebRequest.Create(urlTargetFolder);
uploadFile.Method = "POST";
uploadFile.Accept = "application/json;odata=verbose";
//The content type must match the MIME type of the document
uploadFile.ContentType = "application/octet-stream";
uploadFile.Headers.Add("Authorization", "Bearer " + targetAccessToken);
uploadFile.Headers.Add("binaryStringRequestBody", "true");
uploadFile.Headers.Add("X-RequestDigest", newFormDigest);
Stream uploadStream = uploadFile.GetRequestStream();
fileResponseStream.CopyTo(uploadStream);
WebResponse uploadResponse = uploadFile.GetResponse();
```
Pretty anticlimactic... The only interesting thing here is that I actually do use the library to get the Root Folder, then use the Root Folder.Files.Add method to upload the file.
A gotcha that might getcha here is that you need to specify the MIME type of the document as the content type of the payload, and we must add an extra header of binaryStringRequestBody set to true, to tell the REST API that the the request payload is a binary stream, not a string.
Next you see the X-RequestDigest header that is set to the Request Digest string that we obtained earlier.
Finally, we use the HttpRequest GetReqestStream method with the Stream CopyTo method to upload our file using a stream rather than a bit array. This should allow us to upload large files.
Then we get our uploadResponse that will come back as JSON representation of the SPFile object. This is a good thing, because we will use that to get the list item that is associated with the file that we just uploaded. We use that to update the file metadata.
Get the List Item
Getting the list item requires another REST call. First we parse the data in the uploadResponse to find the ListItemAllFields URI property. That will give us, among other things, the URI of the list item, as well as the list item data type, something we will need when we do the POST that updates the list item.
```csharp
//Get list item
//First get the ListItemAllFields property from the response
Stream getItemAllFieldsStream = uploadResponse.GetResponseStream();
StreamReader getItemAllFieldsReader = new StreamReader(getItemAllFieldsStream);
string itemAllFieldsUri = string.Empty;
string itemAllFieldsResponse = getItemAllFieldsReader.ReadToEnd();
var iAllFields = JObject.Parse(itemAllFieldsResponse);
itemAllFieldsUri = iAllFields["d"]["ListItemAllFields"]["__deferred"]["uri"].ToString();
//Get the list item URI from the response
HttpWebRequest getListItemRequest = (HttpWebRequest)HttpWebRequest.Create(itemAllFieldsUri);
getListItemRequest.Method = "GET";
getListItemRequest.Accept = "application/json;odata=verbose";
getListItemRequest.Headers.Add("Authorization", "Bearer " + targetAccessToken);
WebResponse getListItemWebResponse = getListItemRequest.GetResponse();
Stream getListItemResponseStream = getListItemWebResponse.GetResponseStream();
StreamReader getListItemStreamReader = new StreamReader(getListItemResponseStream);
string getListItemAllProperties = getListItemStreamReader.ReadToEnd();
var getListItemJObject = JObject.Parse(getListItemAllProperties);
string listItemUri = getListItemJObject["d"]["__metadata"]["uri"].ToString();
string listItemDataType = getListItemJObject["d"]["__metadata"]["type"].ToString();
```
This GET works the same way as the ones before, so there is no need to go into it very far. Do take a look at the itemAllFieldsUri, listItemUri, and listItemDataType variables, though. They show how you move through the JObjects in a JSON response using the Newtonsoft.Json classes. In these cases I knew exactly which JSON values I wanted, and I navigated straight to them.
With this GET, we now have the list item URI associated with the file, and the list item data type. We are ready to post our title change.
List Item MERGE
Since we added a file to the library, we get a list item for free. We have the URI of the list item, so we know we can create a REST call to update the metadata. A required piece of the REST call to update list items is the list item data type. We got this piece of data from the last GET, so we are good to go for the final piece of the program:

```csharp
//Update Title field
HttpWebRequest updateTitleRequest = (HttpWebRequest)HttpWebRequest.Create(listItemUri);
updateTitleRequest.Method = "POST";
updateTitleRequest.Accept = "application/json;odata=verbose";
updateTitleRequest.ContentType = "application/json;odata=verbose";
updateTitleRequest.Headers.Add("Authorization", "Bearer " + targetAccessToken);
updateTitleRequest.Headers.Add("X-RequestDigest", newFormDigest);
updateTitleRequest.Headers.Add("X-HTTP-Method", "MERGE");
updateTitleRequest.Headers.Add("IF-MATCH", "*");
string payload = "{'__metadata':{'type':'" + listItemDataType + "'}, 'Title': 'Changed with REST!!'}";
updateTitleRequest.ContentLength = payload.Length;
StreamWriter updateItemWriter = new StreamWriter(updateTitleRequest.GetRequestStream());
updateItemWriter.Write(payload);
updateItemWriter.Flush();
WebResponse updateTitleResponse = updateTitleRequest.GetResponse();
```
Off we go...
For the most part it looks just like the POST we did before. Since this part of the program runs almost immediately after the file upload (one of the major advantages of the REST API is that it is very fast), we can re-use our Request Digest token.
The headers that you should be aware of are the X-HTTP-Method and IF-MATCH headers. Our request method is POST, because that is what we are doing, posting data to the server; however, this is an update to an existing list item, so we need to let SharePoint know. That is where X-HTTP-Method comes in. Many firewalls block anything other than GET or POST HTTP traffic, so we use this header for updates and deletes.
The IF-MATCH header makes the POST conditional. Because we are saying this is an update to an existing entity, we want the POST to fail if there isn't a matching entity.
Finally we come to the payload string. This is the JSON representation of the update object.
We first specify the type in the __metadata object, next we specify the column name to be updated, then the value of that column. We put a length header to ensure proper formatting and security, then create a Stream to send the JSON home.
Execution happens with the GetResponse method.
THAT'S IT!! Kind of a lot to go through, but a good bit of code to have. I will be refactoring this in to a REST service to use with moving files in workflows.
Thanks to Andrew Connell. He initially showed me the way with his GitHub post on how to connect to Office 365. I have taken several classes with Andrew, and he is very free with answers to questions.
Thursday, October 16, 2014
SharePoint 2013 App Web And 404 Errors
Holy cow, it has been a while since I posted something. I have been doing much more management than tech stuff as of late, so I haven't had a lot to comment on. HOWEVER, now that my migration to SharePoint 2013 is done and I am starting to create Apps for SharePoint 2013, I have some gotchas to talk about.
I have an On Prem farm, complete with two WFEs, two Application Servers, two Office Web App Servers, two Database Servers, and now two IIS servers for provider-hosted SharePoint Apps.
Configuring your farm for Apps is actually pretty straightforward, but there are a lot of moving parts. Microsoft has really come through with some excellent documentation on how to get everything working. With an On Prem solution, you do have to make sure that you configure your certificates correctly, otherwise S2S auth will not work.
The gotcha that really tripped me up was the configuration of an app domain. What SharePoint does when you deploy a SharePoint hosted app, or a provider hosted app with a SharePoint hosted component, is create a sub web off of the web where you are installing your app. This sub web is called the "App Web" and requires you to do some configuration. That configuration is well documented. What that documentation DOESN'T tell you is that there are some things you need to consider before pointing your app domain to your existing SharePoint DNS entry. There are some IIS considerations that you need to make.
IIS happily sits on your Windows server listening for HTTP traffic. When you configure a new web site, you need to be specific with your IIS bindings to make sure that two sites do not conflict. The easiest way to do this (without adding IP addresses to your IIS servers, and because we want multiple sites listening on port 443 or port 80) is with the use of Host Headers. The Host Header tells IIS that any HTTP traffic coming in on a particular port with the DNS name SharePoint.MyCompany.com goes to the correct web site. Most SharePoint deployments make use of Host Headers because they want to host both the primary SharePoint web application and the MySites web application on the same servers in the farm.
Now, when you configure your app domain in DNS, you point the app domain to your primary SharePoint deployment with a wildcard CNAME alias. Here comes the fun part... Your app will have the URL of prefix-APPID.myappdomain.com.
If a wildcard alias points your traffic from prefix-APPID.myappdomain.com to SharePoint.MyCompany.com, AND you have IIS with a host header configured for SharePoint.MyCompany.com, the traffic from prefix-APPID.myappdomain.com will be ignored. Not only will it be ignored, it might get picked up by another web site... Such as the Default Web site that is configured without a host header, and set to respond to any IP address on the server by default. In that case, you will receive a 404 error when attempting to connect to your app, because the default web site is not where you deployed your app web.
Therefore you MUST remove the host header from your primary SharePoint IIS site. This, likely, will cause a conflict with your Default web site, so you must either stop that site, or remove it. What a pain!!
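If you want to script the binding change rather than click through IIS Manager, something like the following should do it. The site name and host name are placeholders for your own farm, so treat this as a sketch:

```powershell
# Sketch: clear the host header on the main SharePoint IIS site and quiet the Default Web Site.
Import-Module WebAdministration

# See what bindings exist today on the SharePoint site
Get-WebBinding -Name 'SharePoint - 80'

# Remove the host-header binding and add one that listens on all host names
Remove-WebBinding -Name 'SharePoint - 80' -Port 80 -Protocol http -HostHeader 'sharepoint.mycompany.com'
New-WebBinding    -Name 'SharePoint - 80' -Port 80 -Protocol http -IPAddress '*'

# The Default Web Site also listens on *:80, so stop it to avoid the conflict
Stop-Website -Name 'Default Web Site'
```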
Now, if you have a provider hosted app, that uses a SharePoint hosted app web for authentication and other reasons, you will get strange errors that will have you chasing down the Cross-Domain library rabbit hole. Check your config first to be sure you are going the right direction!
Monday, February 3, 2014
Creation of Content Types - Rant
The creation of content types is a long, tedious process that nobody likes to do. Especially the users whose lives will be made easier because of them. No one ever wants to participate, and most of the time users get upset. "I don't have time for this!!" is a common excuse. The snarky remark that I always have to bite back is, "But you have time to search through endless, meaningless documents clumped together in a mess?"
I bite back the remark, because the user is normally only thinking of the workload they have in front of them. The questions I ask pull time away from that workload. The delayed gratification of using the content type to simplify information retrieval isn't seen. The questions I ask require a bit of thinking outside the box. What I have found is that people HATE thinking outside the box. They like the box very much.
What really really really gets me is that this type of work exposes all of the office behavior that I detest. The office suckups and yes people will jump on their manager's band wagon. The tattle-tales will complain to their managers, or complain to mine that I am taking time away from work. The lazy will just wait around until someone else does it. The unimaginative will simply hand back what you already know or what you have already told them. And the resistant to change will throw a fit, because this isn't how they have done it in the past.
It boils down to the seemingly universal thought that it is better to put out fires than to do the legwork to prevent the fires from starting in the first place. It also has to do with the fact that most people do not like to think about anything that is more than one layer deep. It is the same thinking that makes everyone want to shove all links to all pages on the "home" page of intranets.
*SIGH*
Back to the grind.
Wednesday, January 15, 2014
Installing SharePoint 2013 Prerequisites On Windows Server 2012 R2
As of right now, the SharePoint 2013 Prerequisite Installer is not supported on Windows 2012 R2. Awesome.
So, here is a way to get around it, but not have to do anything manually.
You first need to ensure that you can run PowerShell scripts on your server. Start a PowerShell session as an administrator and type:
Set-ExecutionPolicy unrestricted
You should know that this will disable all warnings for all PowerShell saved scripts. You should be careful as to what you deploy on your server. Buyer beware.
Now, download all of the prerequisite files:
SQL Server 2008 R2 SP1 Native Client
Microsoft WCF Data Services 5.0
Microsoft Information Protection and Control Client (MSIPC)
Microsoft Sync Framework Runtime v1.0 SP1 (x64)
Windows Identity Extensions
Windows Identity Foundation (KB974405)
Windows Server AppFabric
CU 1 for AppFabric 1.1 (KB2671763)
Put these in a folder, and remember where you put them.
Next you are going to create two PS1 files. The first one will install the Windows Services that you need. The second one will install the Microsoft requisite files.
The first script is fairly simple:
```powershell
Import-Module ServerManager

Add-WindowsFeature Net-Framework-Features,Web-Server,Web-WebServer,Web-Common-Http,Web-Static-Content,
    Web-Default-Doc,Web-Dir-Browsing,Web-Http-Errors,Web-App-Dev,Web-Asp-Net,Web-Net-Ext,Web-ISAPI-Ext,
    Web-ISAPI-Filter,Web-Health,Web-Http-Logging,Web-Log-Libraries,Web-Request-Monitor,Web-Http-Tracing,
    Web-Security,Web-Basic-Auth,Web-Windows-Auth,Web-Filtering,Web-Digest-Auth,Web-Performance,
    Web-Stat-Compression,Web-Dyn-Compression,Web-Mgmt-Tools,Web-Mgmt-Console,Web-Mgmt-Compat,Web-Metabase,
    Application-Server,AS-Web-Support,AS-TCP-Port-Sharing,AS-WAS-Support,AS-HTTP-Activation,AS-TCP-Activation,
    AS-Named-Pipes,AS-Net-Framework,WAS,WAS-Process-Model,WAS-NET-Environment,WAS-Config-APIs,
    Web-Lgcy-Scripting,Windows-Identity-Foundation,Server-Media-Foundation,Xps-Viewer -Source D:\sources\sxs

Write-Output "Windows features installed. Computer will restart in 5 seconds."
Start-Sleep -Seconds 5
Restart-Computer -Force
```
All this does is call the Add-WindowsFeature of the ServerManager module. Just run this guy as an administrator and you are good to go. You will need to restart after the installs... Well, maybe you don't, but I always like to. Anyway, at the end I warn the user that the server will restart in 5 seconds, then I restart it.
The next script uses a couple of different commands. The first command, Unblock-File, gets rid of that annoying "Do you really want to run this file?" message that Windows likes so well. We just want to install; we don't want to click anything.
The next command is Start-Process. Start-Process is a nifty cmdlet that enables you to start a program, pass it arguments, and tell PowerShell to just hang out until the process completes.
param([string] $SharePoint2013Path = $(Read-Host -Prompt "Please enter the directory path to where your SharePoint 2013 pre-req installation files exist.")) $CurrentLocation = $SharePoint2013Path Unblock-File -Path $SharePoint2013Path\MicrosoftIdentityExtensions-64.msi Start-Process -filepath $SharePoint2013Path\MicrosoftIdentityExtensions-64.msi -ArgumentList "/passive" -Wait Unblock-File -Path $SharePoint2013Path\setup_msipc_x64.msi Start-Process -filepath $SharePoint2013Path\setup_msipc_x64.msi -ArgumentList "/passive" -Wait Unblock-File -Path $SharePoint2013Path\sqlncli.msi Start-Process $SharePoint2013Path\sqlncli.msi -ArgumentList "/passive IACCEPTSQLNCLILICENSETERMS=YES" -Wait Unblock-File -Path $SharePoint2013Path\Synchronization.msi Start-Process -filepath $SharePoint2013Path\Synchronization.msi -ArgumentList "/passive" -Wait Unblock-File -Path $SharePoint2013Path\WcfDataServices.exe Start-Process -filepath $SharePoint2013Path\WcfDataServices.exe -ArgumentList "/passive /norestart" -Wait -Verb RunAs Unblock-File -Path $SharePoint2013Path\WindowsServerAppFabricSetup_x64.exe Start-Process -filepath $SharePoint2013Path\WindowsServerAppFabricSetup_x64.exe -ArgumentList "/i CacheClient,CachingService,CacheAdmin /gac" -Wait -Verb RunAs Unblock-File -Path $SharePoint2013Path\AppFabric1.1-RTM-KB2671763-x64-ENU.exe Start-Process -filepath $SharePoint2013Path\AppFabric1.1-RTM-KB2671763-x64-ENU.exe -ArgumentList "/passive /norestart" -Wait -Verb RunAs Write-Output "Complete. Server will restart in 5 seconds" Start-Sleep -Seconds 5 Restart-Computer -Force
So, what I do here is ask the user where the install files are (that should be the folder where you saved the prereqs you downloaded earlier), then run the installs in the correct order, with the correct arguments. This is very important, especially with the AppFabric setup.
I then restart the server again. You might need it, you might not, I like to because it is a good idea after installing so much stuff, and before you are going to do the BIG install of SharePoint.
That's it! You can either run the manual install from here or run another kind of scripted install. Everything should work, unless the script tries to run the PrerequisiteInstaller.exe program...